FreshRSS

  • ✇Android Authority, by Rushil Agrawal

Meta Quest headsets now support HDMI connections to let you stream anything on a big screen

17 August 2024 at 22:20

Meta Quest 3 with Two Controllers On Table

Credit: C. Scott Brown / Android Authority

  • Meta has launched the Meta Quest HDMI Link app, enabling users to connect their headsets to HDMI and DisplayPort devices.
  • The app supports Meta Quest 2, 3, and Pro headsets and offers 1080p resolution content with low latency.
  • Setting up the feature requires an additional UVC and UAC-compatible capture card.


Meta has rolled out a new app, Meta Quest HDMI Link, designed to extend the functionality of its VR headsets. The app is available for Meta Quest 2, Quest 3, and Quest Pro models. It allows users to connect their headsets to various HDMI or DisplayPort-equipped devices, such as gaming consoles, laptops, and even smartphones, effectively turning the headset into a virtual display. (via The Verge)

The Quest headsets have traditionally been positioned as standalone devices for VR experiences. With HDMI Link, the company is acknowledging users’ desire to access content from other devices within their VR environment.

HDMI Link supports 1080p content with low latency, providing a customizable virtual screen experience. Users can resize and reposition this screen within the VR environment, making it suitable for gaming, watching movies, or even working privately.

Setting up HDMI Link, however, isn’t as straightforward as plugging a USB cable into your phone or gaming console. It requires a compatible capture card — a small device that connects your HDMI source to your headset via USB — adding an extra layer of complexity and cost. Meta acknowledges this in its blog post, cautioning users that it’s not quite as plug-and-play as they might expect.

Another important drawback to HDMI Link is its inability to display content protected by HDCP, which is common with many streaming services. This limitation means that while the feature is versatile, its utility for streaming movies and TV shows might be limited.

Meta emphasizes that HDMI Link is not meant to replace existing features like Air Link or Xbox Cloud Gaming, which offer wireless streaming options under ideal network conditions. Instead, it’s intended to provide a solution for situations where Wi-Fi is unreliable or unavailable or when users want to connect devices that aren’t supported by other methods. The app is currently available on App Lab, which indicates that it’s still in development and might need some work to iron out some kinks.

  • ✇GamesIndustry.biz Latest Articles Feed, by Marie Dealessandri

Meta reportedly shutters Ready at Dawn

Meta has reportedly shut down developer Ready at Dawn.

That's according to Android Central, which had access to an internal memo sent to employees yesterday by VP of Oculus Studios Gio Hunt, reportedly saying that Ready at Dawn was closing effective immediately.

A spokesperson told the publication that the closure wasn't to "save money" but to ensure Meta's VR division, Reality Labs, stays on target with its new budget restrictions and can deliver "better long-term impact."


Massively OP Podcast Episode 481: Pre-Gamescom MMO mini podcast

20 August 2024 at 22:00
In this mini episode, Bree runs down Throne & Liberty's delay, New World's Aeternum beta, Guild Wars 2's Janthir Wilds launch, the Richard Garriott Ultima Online rumor, the state of Ultima Online New Legacy, Nightingale's Realms Rebuilt, the record-setting SWG Legends' SOEclipse event, and the approach of Gamescom.


MMO Week in Review: Myth Drannor, Janthir Wilds ahoy, and Throne & Liberty delayed

19 August 2024 at 02:10
This week in the world of MMORPGs tried its damnedest to make up for the week before (iykyk): Throne & Liberty got a two-week delay, New World picked a date for its Aeternum crossplay beta, Dungeons and Dragons Online launched its Myth Drannor expansion, and Richard Garriott suggested he’s trying to get Ultima Online back […]

Dungeonborne adds a 3v3 PvP mode, new class skill, and a new gear tier in latest update

18 August 2024 at 17:00
The updates keep on a-comin’ for the free-to-play PvPvE extraction RPG Dungeonborne, which has a new patch out this week that introduces some new gameplay, a new healer ability, and a fresh set of gear to gather. The major piece of this update is Clotho’s Trial, a new limited-time PvP arena that pits two teams […]
  • ✇Mega Visions, by David Maddox

PREVIEW: Hunt or be hunted in INVERSE [Meta Quest]

6 September 2023 at 23:02

INVERSE, a survival horror VR game from developer MassVR, drops on the Meta Quest today. We’ll have a full review soon, but we got a sneak peek at the game and mechanics. We’ll give you the rundown on what you’ll be in for with this twist on the innovative asymmetric gameplay genre.

INVERSE is not just a simple solo or multiplayer VR game. While yes, it does offer players the choice to embark on their own as either a lone monster hunter or to play as a team, the agents you play must survive defenseless until power control terminals throughout the creepy facility are activated.

All the while, a twisted, monstrous entity called the Nul attacks at every opportunity. But who is this destructive creature… could it be one of your own?

Game design by those who love the genre

MassVR is a team of horror and sci-fi enthusiasts, so INVERSE offers a ‘spine-chilling and cinematic experience’ that immerses players in a relentless fight for survival. In a press release, Chris Lai, CEO and co-founder of the company, said: “INVERSE brings the essence of survival horror to VR, offering players an unmatched experience of fear, thrill, and strategic gameplay.”

The game makes a bold claim that ‘horror enthusiasts’ and ‘virtual reality aficionados’ should prepare themselves for a ‘chilling journey into the depths of fear and discover what awaits in the shadowy realms.’ And the visual setup certainly delivers. The look and feel of the abandoned station is quite creepy and just enough of a maze to keep players looking over their shoulder.

Agent… or monster?

The game supports up to four players working together as agents, locating and unlocking weapons terminals to defeat the evil Nul. But that’s where the fun twist comes into play: one player actually assumes the role of the Nul, using unique abilities to hunt down and eliminate the agents!

But don’t worry if you don’t know enough people to fill a team. AI players can seamlessly fill in to make sure the experience is immersive regardless of the number of human players.

Players begin by completing tasks and surviving Nul attacks. Once your weapon is unlocked, it’s time to go on the offensive in Monster Hunt mode. INVERSE is unique in that all players play offense and defense in each match. This ensures that you must master both offensive and defensive tactics to succeed.

Don’t fear being alone

The solo player experience has been quite well thought out, too. During single-player matches, INVERSE presents plenty of challenges that test players’ skills, but they also offer a deeper exploration of the game’s lore and terror-filled world. The story the game is built around takes center stage.

Plus, the progressive leveling system allows players to unlock new skills and perks. These bonuses keep the game interesting and leave players better equipped to face the growing challenges, enhancing their chances of survival.

Face the ever growing darkness

MassVR has stated that they’re going to provide a “continuously evolving experience for players.” INVERSE is expected to receive bi-annual content updates to ensure players have new experiences, which will hopefully keep the game from getting monotonous. Since the company started out in the physical VR space, consumer headset releases like this certainly seem to add more versatility and room to grow to its repertoire.

Lai says, “We wanted to create a game that captures the intense emotions of playing a horror game with friends in real life while still delivering engaging gameplay for both solo and cooperative play styles in VR.” Time will tell if they succeeded, but INVERSE is certainly off to a good start.

The game is available on the Meta Quest today, Sept. 7. We’ll have a full review shortly.

Are you excited for this new horror multiplayer experience? Let us know in the comments.


  • ✇NekoJonez's Gaming Blog, by NekoJonez

First Impression: Cave Digger 2 (PC – Steam) ~ No Feedback

By: NekoJonez
12 July 2024 at 19:00

Steam store page

One of my favorite activities in Minecraft is going deep into the caves and just exploring them. A few years ago, the developers behind Cave Digger reached out to me and asked me to review their game. Not too long after, the sequel was released and looked like it would be a VR exclusive, until I noticed that it appeared on the Nintendo Switch eShop. So I thought, maybe it also released on Steam, since after playing the Switch version, I felt this game was better played with keyboard and mouse. A non-VR version is now on Steam… But is it worth it? Well, after playing the first sections of this game, I want to talk about it. The latest update was on May 28th, 2024, when writing this article. Now, before we dive right into it, I want to invite you to leave a comment in the comment section with your thoughts and/or opinions on this game and/or the content of this article.

Risk of Staleness

In this game, we play as an unnamed miner who is thrown into the deep end when his digger breaks. You arrive at a mysterious valley where a hardy explorer once did his research. But why? What secrets are hidden in these valleys and the accompanying mines? That’s for our miner to figure out. Now, the story is told through various comic book pages you can uncover and, according to the Steam store page, has multiple endings. I’m quite curious where it’s going to go.

So far, I haven’t gotten too deep into the story. But, from what I can read on the Steam store page, I think it has potential. I have my doubts on how the multiple endings will work. Since comic books mostly have one ending, right? Unless, it all depends on which page(s) you find or in which order or where. That’s something I’ll discover when I’m deeper into the game.

If this game is like the original, the story will take a backseat to the gameplay, and after 5 hours in, that’s the case. The original didn’t have a lot of story to begin with, but more story in a game like this can be interesting.

There is one voice actor in this game. He does a pretty fine job and brings some life to the atmosphere. I replayed a bit of the first game and, I have to be honest, I appreciate the small voice lines during exploration. Even though you quickly hear every different line, they’re a nice break since they aren’t spammed and don’t appear that often.

One of the biggest changes in this game is that the cave is randomly generated each time you enter, so this game becomes a roguelike to a degree. But you can always exit via the lifts to safety, since dying in the caves means that at least half of your obtained loot is dropped. The atmosphere this time around is very cohesive. This game presents itself as a sci-fi western, and it really feels like that. Something I really like is that it doesn’t go overboard on the sci-fi and stays grounded. The technology could realistically exist today, apart from the unique enemies in the cave, that is.

With the story taking more of a backseat, it’s quite important that the gameplay loop is enjoyable. The loop is simple: you explore the caves with 4 chosen tools. The three slots above the entrance give you a hint about which tools you’ll need to bring to gather the most loot. You take the lift down and gather loot while fighting enemies and avoiding pitfalls to survive. The goal is also to find the other elevator that takes you down to the next level to gather even more valuable ores to bring to the top. You feed the ores you gathered into the grinder to buy upgrades to your tools and environment to progress.

The big risk with this kind of gameplay loop is that this is just a different numbers game. What I mean by that is that, apart from maybe the visuals changing, the core concept is always the same. This risks the game becoming stale and repetitive. It’s possible that it’s just a “me thing”, but I enjoy games like this more when there are some variations in the gameplay or some different puzzles. Thankfully, this game has that. There are a lot of things you can upgrade and improve to make each run feel rewarding, and each type of cave you can visit has different enemy types and unique layouts to keep you on your toes. In a way, I’d dare to compare the idea to Cult of the Lamb to a degree.

The music in this game is also a blast. It fits the atmosphere of each area like a glove. My favorite track is the one that plays in the lake caves. It sounds like you’d imagine a typical track like that to sound, and it gets more intense while you are fighting enemies down there. Now, the silent moments when the music doesn’t play feel a bit long, but I always know there is more music coming, that it fits the atmosphere perfectly, and that it draws me more into the game. Sadly, these silences aren’t the only problem with this game, and I’d like to talk about the others.

No feedback

This game has an addictive gameplay loop, and I’m really curious how the multiplayer works. I haven’t tested the multiplayer in this game, but it looks like fun. Now, this game can be played solo perfectly fine.

Now, I don’t know if VRKiwi took the VR version as a base for the non VR version, since I have the impression, that is the case. I especially notice that with the controls in this game. It feels a bit floaty, like you aren’t really connected to the ground. It also feels a bit stiff, like you have to move your mouse like you would a VR headset. You really have to play with the settings until you hit that sweetspot that feels right for you. For me, I had to lower the sensitivity to 80, amongst other things. I highly recommend that you tweak the settings to your liking, since on the Nintendo Switch version, I had to lower the sensitivity to 40 before it felt right.

Still, the character control doesn’t feel right. At first, I thought it was because the controls felt floaty… But after some testing, I think I found a few other problems that might cause it to not feel quite right. First, the jump in this game is just silly. You can’t really rely on it, since it doesn’t always trigger when you hit the spacebar, and it’s a pathetic jump anyway. You can’t even jump out of ankle-high water sometimes.

Secondly, there are no sound effects for walking on most floors. You feel like you are floating, and it’s jarring when you suddenly hear a sound effect when you walk over a table or a railway. Thirdly, climbing ropes, amongst other things, is just insanely picky, and there is no real feedback or sound to show you’ve grabbed the rope. Fourthly, the scroll order between tools is extremely weird: the numbers appear on the wheel counterclockwise, but you scroll down, right, left, up, which still confuses me after 6 hours of playing this game.

And finally, some interactions are extremely picky. For example, there are safe riddles you can solve down in the caves, but rotating the letter wheels to pick the right letter is more difficult than it should be. All of these things give you the feeling that you aren’t always in control of your character and that you don’t get feedback as a player on what’s happening, leaving you unsure and doubting whether you are doing the right thing.

Prompts like “Use W/S to use the crank” should read “Hold W/S to use the crank”, since you need to hold the key instead of pressing it. Small things like that could also improve this game and its controls quite a lot. Overall, the controls are good, but they sometimes lack feedback to the player, either through sound effects or visual effects. With the hammer, for example, you barely get any sound effects when you use it, and it has a wind-up animation, making you unsure whether you are using it or not.

That is one of the biggest flaws in this game: the lack of feedback on your actions. Things like not knowing how many bullets are left in your revolver, or the absence of a sound effect when you hit an actual enemy. If there is one thing I’d use the built-in feedback tool for, it’s to report the various moments when I expect feedback from the game, like a sound effect or visual effect. Maybe they appear in the form of rumble effects… But I’m not playing this game with a controller.

Reading this section of the article, I wouldn’t blame you for thinking this game isn’t good. Small bugs, like the “Press R to reload” text showing when your gun isn’t equipped, or bullets leaving from the player model instead of the gun, don’t improve things either. Yet I find myself looking past these problems since the core gameplay still works. I find myself getting used to the jank and discovering a very rough diamond. If the developers keep their promise of improving this game, I think more action feedback will bring a lot to it, along with fixes for the small bugs mentioned in this paragraph.

There are also things like the shovel animation looking weird sometimes; it looks like the arms go through each other after a dig. Speaking of the shovel, the last dig is annoying since you have to move a pixel or two for it to count and give you your goodies. But the bug I’d most love to see fixed is the several-second freeze when you pick up something new or get a new codex entry. The game locks up like it’s about to crash, but it doesn’t.

What’s next for us?

Usually, I’m not really picky when it comes to the visuals of a game. As long as a game looks consistent, I’m quite happy. It needs to have a certain style so that you can quickly identify what’s what and enjoy the game.

Yet, for this game, I do have some things that I don’t really like in terms of the visuals. Firstly, the contrast between some ores and the floor isn’t strong enough. Sometimes I passed over ores since I wasn’t able to notice them on the ground.

There are also a lot of objects that add detail to the cave, but you can barely interact with them. I’d love to see lily pads in lakes move a bit when you walk past them, or something more than just being able to clip through them. I’d also like a sound effect when you hit a wall you can’t mine. You get shouted at when you use the wrong or too weak a tool on something, so why not for the rest?

I think the biggest mistake the visuals make is that they have an identity crisis. What I mean by that is that the style isn’t cohesive. There is a lot of cel shading going on, but there are also a lot of details that give off a more realistic vibe. Some textures aren’t detailed enough and are stretched too wide, which clashes with the rest of the visuals that look more modern. The floor textures often suffer most from this issue.

Looking back at this article, I think I’m being very critical of this game. I have played far worse and more broken games for 15€. But in this game you even have customisation options for your character, and the developers are extremely open to feedback. This game has a lot going for it: fun achievements to hunt for, bosses at the end of runs, and an amazing auto-save system.

Apart from the need to improve the character controls and add some feedback on actions, I think this game is pretty decent. Yes, some polish is missing, like a tooltip on the lever at the cave entrance explaining what it does. I personally feel less conflicted about this game compared to the original. The growth in this title is immense and gives me a lot of hope for some amazing updates, DLC, or a new entry in the series.

The basis for an amazing title is here, and if you look past its shortcomings, this game is a blast to play. Maybe it’s a bit too repetitive for some and more fun in short bursts, but when this game sinks its hooks into you, it really clicks. There is some polishing left to do, and for a rather new VR-focused developer, this is amazing. It’s their second non-VR game and it shows a lot of promise.

The game is perfect for winding down, since it isn’t too difficult and is rather forgiving. I wouldn’t be surprised if I play it after work to relax and slowly finish it. Then again, while I’m writing this, I’m on summer holidays, and I wouldn’t be surprised if I finish most of this game during my summer break.

Like I said earlier, I feel less conflicted about this game compared to the previous title. It’s less repetitive and has a lot more going for it than the original. It has its problems, yes, but if you enjoy games like Minecraft, SteamWorld Dig, or Cave Digger, give the demo of this game a chance. The demo gives a very good idea of what you can expect, and if you enjoy it, buy the game. I’m enjoying myself quite a lot with this game, and I’m happy that I chose the PC version over the Switch version, since I feel it just plays better. But maybe, if I get used to the Switch controls, I might enjoy it on Switch as well.

With that said, I have said everything I wanted to say about this game for now. Maybe when I finish it, I’ll write a full review with my final thoughts and opinions. For now, I think the best conclusion is that it’s an amazing step up from the original and, besides some unpolished things… It’s a great game and comes recommended by me.

So, it’s time to wrap up this article with my usual outro. I hope you enjoyed reading it as much as I enjoyed writing it. I hope to be able to welcome you in another article, but until then have a great rest of your day and take care.

  • ✇AmigaGuru's GamerBlog, by TheAmigaGuru

STILT – Jumping Flash In VR?

5 March 2024 at 09:55
The idea of experiencing a first-person platformer in VR initially seemed daunting, but in truth, it turned out to be one of the most enjoyable VR experiences I've had in quite some time. Silky smooth framerate, excellent yet simple graphics set in a universe that I would never have visited if the world was flat. Could it truly be as remarkable as it sounds?


Now This Is Borderland Madness: Gazzlers VR

27 October 2023 at 16:07
Gazzlers is a rail-shooter, shooting-gallery type of game with roguelite mechanics. It comes with a good levelling system and loads of power-ups, and it totally works as if it were made as a coin-op game that belongs in your local arcade hall...


  • ✇Massively Overpowered, by Bree Royce

MMO Week in Review: Bungie’s bungles, Fellowship’s fancies

5 August 2024 at 02:00
Destiny 2’s Bungie volunteered to be the industry villain of the week as it slashed multiple games and teams with cancelations and layoffs as part of cost-cutting measures to rescue the Sony-owned studio from Bungie executives’ management blunders. Meanwhile, we took a look at Fellowship, pondered what theoretical World of Warcraft housing could look like, […]
  • ✇GamesIndustry.biz Latest Articles Feed, by Sophie McEvoy

Meta records loss of $4.5bn in Reality Labs during Q2

Meta published its second quarter earnings today, and has recorded a loss of $4.5 billion in its AR/VR Reality Labs division.

This is the second reported loss of the year in this segment, with Reality Labs reporting a loss of $3.8 billion during its first quarter. Its Q2 revenue was up 28% year-over-year, largely driven by sales of Quest headsets.

Expenses in this segment increased 21% year-over-year to $4.8 billion due to "higher-headcount related expenses and Reality Labs inventory costs."


  • ✇Android Authority, by Rushil Agrawal

WhatsApp’s new feature makes chatting with AI easier for bad texters

1 August 2024 at 23:50

WhatsApp notifications in settings menu

Credit: Hadlee Simons / Android Authority

  • WhatsApp is adding a new feature, allowing users to talk with Meta AI through voice messages.
  • Before this update, conversations with Meta AI on WhatsApp were limited to text and images.
  • This feature is currently available to a limited number of beta testers.


It seems like 2024 will be remembered as the year when every tech giant raced to make its version of AI chatbot a staple of our daily lives. While Google Gemini and ChatGPT are currently the top contenders, Meta has been slowly integrating Meta AI into its popular apps — Instagram, Facebook, and WhatsApp — making it easier than ever to chat with Meta’s take on an AI chatbot.

But what about the rest of us who don’t have the patience to type in long, tedious prompts for AI chatbots? I’m glad you asked because WaBetaInfo has uncovered a new feature in the WhatsApp beta for Android (version 2.24.16.10) that will allow users to send voice messages to Meta AI. Previously, communication with Meta AI was limited to text and image-based interactions.

Screenshot: WhatsApp voice message feature in the Meta AI chat on Android

Credit: WaBetaInfo

A screenshot shared by the publication gives us a glimpse of what this might look like, with a voice message button appearing right in the Meta AI chat interface. This suggests that sending voice messages to Meta AI could work very similarly to how it works in regular conversations. While there are no details about the languages that Meta AI will support for voice messaging, given WhatsApp’s popularity in South Asian countries, Meta is likely to prioritize multilingual support.

The big question now is whether the new voice chat feature will enable Meta AI to perform specific functions within WhatsApp, such as replying to messages or suggesting responses, or if it will primarily act as a general voice assistant for tasks like web searches and recommendations. If it leans towards the latter, Meta AI may face stiff competition from the default voice assistants that our smartphones already come with.

We’ll be eager to see how this new feature unfolds in the coming months. The voice chat feature is currently available to a limited number of beta testers, but it should soon be available to a broader user base.

Meta unveils it’s AITemplate GPU framework

3 October 2022 at 19:00

Meta is announcing its new AITemplate framework for GPUs.




  • ✇Android Authority, by Stephen Schenck

Meta AI celebrity chatbots have been exactly as popular as you’d expect

31 July 2024 at 23:06

  • 10 months after their inception, Meta has canceled its 28 celebrity chatbots.
  • The AI-powered accounts had been featured on both Facebook and Instagram.


The way some companies with big AI dreams are thinking, smartphone users want nothing more than to get advice from, be entertained by, and interact with virtual chatbots. Meta is betting so big on this concept that it just launched a major effort to let people design custom AI chatbots, tailored precisely to their preferences. But as firms like Meta try to zero in on the kind of AI interactions that are most engaging, they’re also picking up a lot of lessons about what doesn’t work. And as Meta apparently learned the hard way, that includes chatbots based on celebrities.

Meta introduced its celeb chatbots last September with a total of 28 accounts, all featuring the likeness of a famous person and given specific personas. The whole thing was a bit weird, not even using actual celebrity names: you might interact with “Lorena” the travel expert, based on Padma Lakshmi, or chat with Charli D’Amelio’s “Coco” the dancer. At least those track tonally with the people they’re based on, but others felt like wild swings: Paris Hilton as “Amber” the detective “for solving whodunnits,” or Snoop Dogg not as the resident cannabis sommelier but “Dungeon Master,” ready to help plan your next tabletop adventure.

Paris Hilton not being a detective.

Credit: Meta

Fast-forward ten months, and Meta is taking these chatbots back behind the shed for some Old Yeller action. The Information reports that the Facebook and Instagram pages for all these bots went offline earlier this week. The company confirmed to the site it had discontinued the feature, highlighting what it picked up from the experience, explaining, “We took a lot of learnings from building them and Meta AI to understand how people can use AIs to connect and create in unique ways.”

Based on the timing, we’d certainly hope that many of those lessons became the foundation for Meta’s new AI Studio offering, where rather than choosing from all these pre-packaged disparate personas, users can take the time to craft a custom experience. Maybe that one won’t last, either, but sometimes you have to learn what doesn’t work before you can figure out what does.

  • ✇Android Authority, by Vinayak Guha

Ray-Ban Meta Smart Glasses’ new update triples recording time limit

24 June 2024 at 18:13

 

  • The Ray-Ban Meta Smart Glasses now allow video recording for up to three minutes.
  • The latest update includes support for Amazon Music and Calm.
  • These features can only be accessed by iOS users; Android support is expected soon.

A number of companies, from Amazon to Google, offer smart glasses or have them in the works. But the Ray-Ban Meta Smart Glasses are still one of the top options in this competitive market. 

  • ✇Techdirt, by Leigh Beadon

Ctrl-Alt-Speech: This Podcast May Be Hazardous To Moral Panics

21 June 2024 at 23:35

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

  • ✇PCGamesN, by Niall Walsh

Grab eight VR games worth $189 for just $10, if you’re quick

24 June 2024 at 15:40

There's no shortage of content available via Humble Bundle in 2024, and this latest showcase is one for fans of virtual reality. Devolver Digital is offering eight of its top VR titles in one simple package, with the proceeds being donated to Special Effect.

Many of the best VR games are available on a variety of platforms, but all these titles must be redeemed on Steam. If you have a Meta Quest 3, or any other PCVR-compatible headset, all these games will work great if you're either tethered to your gaming PC, or using it wirelessly via Steam Link.

  • ✇Ars Technica - All content, by Benj Edwards

Anthropic introduces Claude 3.5 Sonnet, matching GPT-4o on benchmarks

20 June 2024 at 23:04
The Anthropic Claude 3 logo, jazzed up by Benj Edwards.

Credit: Anthropic / Benj Edwards

On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of "3.5" models that build upon Claude 3, launched in March. Claude 3.5 can compose text, analyze data, and write code. It features a 200,000 token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that shows related work documents in a dedicated window.
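For readers who want to try the model programmatically, here is a minimal sketch of what a request looks like through Anthropic's Python SDK. It assumes the anthropic package is installed and an ANTHROPIC_API_KEY environment variable is set; the prompt text and token limit are illustrative, and the model identifier shown is the launch-era ID, which may have been superseded since.

    import anthropic

    # Assumes ANTHROPIC_API_KEY is set in the environment.
    client = anthropic.Anthropic()

    # Minimal request to Claude 3.5 Sonnet; prompt and max_tokens are illustrative.
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=512,
        messages=[
            {"role": "user", "content": "Summarize the trade-offs of a 200,000-token context window."}
        ],
    )

    # The response is a list of content blocks; the first block holds the generated text.
    print(message.content[0].text)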

So far, people outside of Anthropic seem impressed. "This model is really, really good," wrote independent AI researcher Simon Willison on X. "I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump)."

As we've written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).


  • ✇GAME PRESS, by Jiří Morávek

TEST: KIWI Design Link cables (5m) for connecting VR headsets to a PC

9 June 2024 at 16:59

More advanced VR headset users surely already know what Link cables like these can be good for. The quality of playing PCVR titles is directly proportional to the quality of the connecting cable, so it's no surprise that there is strong demand for these products on the market, especially from the gaming community.

In our previous test of KIWI Design products, we looked at the SPC Strap with a powerful battery for the Quest 2 and 3 and a vertical RGB stand supporting single-point-charging technology. Today we'll look at no less important parts of a home VR ecosystem. We have two kinds of connecting cables at our disposal.

 

Both models measure a respectable 5 meters, and at first glance it is fairly hard to tell them apart. The differences between them are nevertheless striking, and their proper use differs as well. In the VR Link cable segment it is not just about design and transfer speed, but, last but not least, also about durability.

Link Cable 16FT (compatible with Quest headsets and the Pico 4): 6/10

First, let's take a closer look at the connecting cable that is the cheaper of today's pair. The manufacturer promises compatibility with the Meta Quest 1, 2, and 3 and the new model from Pico. The USB-A to USB-C connection provides a simple link between the VR headset and your computer, letting you enjoy PCVR games quickly and easily. KIWI Design warns, however, that this type of cable is not suitable for charging; it has other strengths instead.

Thanks to USB 3.0, the transfer speed ranges from 2.5 up to 3.2 Gbps, with the exact figure depending on the performance of your PC. Playing VR games while tethered to a PC has its advantages and disadvantages. If you bought a Meta Quest precisely because of the absence of cables, this option may not even appeal to you. Once connected to a PC, however, the Quest can offer a much sharper image. Before the Quest 3 I used PS VR, and I constantly worried about the cable, which kept tangling somewhere while I stepped on it or accidentally pulled on it, so it is no wonder I had to send that headset in for a warranty repair precisely because of the cable…

KIWI Design claims in its product description that the cable was tested 5,000 times and should last users up to 6 years with standard use. Obviously, I am not able to test that claim. On the one hand, I can't imagine today's transfer speeds still being enough for us in some 6 years; on the other hand, the stiffness and build quality of the cable were relatively reassuring. I have in mind above all the flexible strain-relief ends, which at first didn't seem very elastic, but after a few days of use they loosened up and I can't praise them enough.

Naturally, the USB-C connector is L-shaped, so that everything fits not only today's VR standards but above all the KIWI Design product ecosystem. Although this is a premium cable that can do more than it might seem at first glance, I only carry it in my travel VR case. The responsibility of connecting my Quest 3 to my PC went to the second cable that KIWI provided us.

Link Cable 16FT with cable clip: 9/10

Compared to the cable I introduced above, there are only a few differences here. One of the less obvious but all the more fundamental ones is compatibility: the product description lists not only the classic Meta Quest headsets and the Pico 4 but also the Meta Quest Pro. The more visible change at first glance is the package contents, which include two cable clips. One of them serves to better route the cable into the headset's charging port and is fitted with a Velcro strap, so you should be able to attach it to almost any third-party head strap. (All KIWI Design straps include them in the standard package.)

 

You attach the second clip directly to your desktop computer using a 3M adhesive pad. The cable is thus protected on both ends, and in combination with its durability (a tin-plated jacket) and flexibility, nothing should stop you from playing wild shooters or other action titles.

But let's get back to hard facts and figures. The maximum transfer speed of this Link cable is a respectable 5 Gbps. Unlike its namesake tested above, this model is also suitable for charging and can handle a current of up to 3 A, so you don't have to worry about the headset's battery level. KIWI Design notes on its website that you should check before purchase whether this cable supports the graphics card you use.

Finally, I should also mention the length itself. Both tested cables measured a full 5 meters, but you can choose from two lengths. I have to say, though, that the 5-meter length was fully satisfactory only with the cable we covered second, thanks to its softness and pliability. The first cable reviewed felt sturdier, but it was harder to handle, and in its case I wouldn't have minded a shorter length.

Honestly, I'm not entirely sure how big the market's appetite is for this type of cable, but the same formula applies here as with the vast majority of VR accessories. Both cables tested today have a qualitative edge over the original Link cable from Oculus, or rather Meta. As with head straps, the difference is quite striking, and if you belong to the target group and need a cable like this, you certainly won't go wrong with either of these two. Durability, transfer speed, and flexibility are the main selling points and the main areas where they lead the competition from Meta.

You can revisit our test of the KIWI Design SPC strap with battery and the RGB vertical charging stand below.

TEST: KIWI Design RGB Charging Station and SPC Battery Strap for Meta Quest 3

 


  • ✇Semiconductor Engineering, by The SE Staff

Chip Industry Week In Review

31 May 2024 at 09:01

JEDEC and the Open Compute Project rolled out a new set of guidelines for standardizing chiplet characterization details, such as thermal properties, physical and mechanical requirements, and behavior specs. Those details have been a sticking point for commercial chiplets, because without them it’s not possible to choose the best chiplet for a particular application or workload. The guidelines are a prerequisite for a multi-vendor chiplet marketplace.

AMD, Broadcom, Cisco, Google, HPE, Intel, Meta, and Microsoft proposed a new high-speed, low-latency interconnect specification, Ultra Accelerator Link (UALink), between accelerators and switches in AI computing pods. The 1.0 specification will enable the connection of up to 1,024 accelerators within a pod and allow for direct loads and stores between the memory attached to accelerators.

Arm debuted a range of new CPUs, including the Cortex-X925 for on-device generative AI, and the Cortex-A725 with improved efficiency for AI and mobile gaming. It also announced the Immortalis-G925 GPU for flagship smartphones, and the Mali-G725/625 GPUs for consumer devices. Additionally, Arm announced Compute Subsystems (CSS) for Client to provide foundational computing elements for AI smartphone and PC SoCs, and it introduced KleidiAI, a set of compute kernels for developers of AI frameworks. The Armv9-A architecture also added support for the Scalable Matrix Extension to accelerate AI workloads.

TSMC said its 2nm process is on target to begin mass production in 2025. Meanwhile, Samsung is expected to release its 1nm plan next month, targeting mass production for 2026 — a year ahead of schedule, reports Business Korea.

CHIPs for America and NATCAST released a 2024 roadmap for the U.S. National Semiconductor Technology Center (NSTC), identifying priorities for facilities, research, workforce development, and membership.

China is investing CNY 344 billion (~$47.5 billion) into the third phase of its National Integrated Circuit Industry Investment Fund, also known as the Big Fund, to support its semiconductor sector and supply chain, according to numerous reports.

Malaysia plans to invest $5.3 billion in seed capital and support for semiconductor manufacturing in an effort to attract more than $100 billion in foreign investments, reports Reuters. Prime Minister Anwar Ibrahim announced the effort to create at least 10 companies focused on IC design, advanced packaging, and equipment manufacturing.

imec demonstrated a die-to-wafer hybrid bonding flow for Cu-Cu and SiCN-SiCN at pitches down to 2µm at the IEEE’s ECTC conference. This breakthrough could enable die and wafer-level optical interconnects.

The chip industry is racing to develop glass for advanced packaging, setting the stage for one of the biggest shifts in chip materials in decades — and one that will introduce a broad new set of challenges that will take years to fully resolve.

Quick links to more news:

In-Depth
Global
Product News
Markets and Money
Security
Research and Training
Quantum
Events and Further Reading


In-Depth

Semiconductor Engineering published its Systems & Design newsletter featuring these top stories:


Global

STMicroelectronics is building a fully integrated SiC facility in Catania, Italy.  The high-volume 200mm facility is projected to cost over $5 billion.

Siliconware Precision Industries Co. Ltd. (SPIL) broke ground on an RM 6 billion (~$1.3 billion) advanced packaging and testing facility in Malaysia. Also, Google will invest $2 billion in Malaysia for its first data center and a Google Cloud hub to meet growing demand for cloud services and AI literacy programs, reports AP.

In an SEC filing, Applied Materials received additional subpoenas from the U.S. Department of Commerce’s (DoC) Bureau of Industry and Security related to shipments of advanced semiconductor equipment to China. This comes on the heels of similar subpoenas issued last year.

A Chinese contractor working for SK hynix was arrested in South Korea and is being charged with funneling more than 3,000 copies of a paper on solving process failure issues to Huawei, reports South Korea’s Union News.

VSORA, CEA-Grenoble, and Valeo were awarded $7 million from the French government to build low-latency, low-power AI inference co-processors for autonomous driving and other applications.

In the U.S., the National Highway Traffic Safety Administration (NHTSA) is investigating unexpected driving behaviors of vehicles equipped with Waymo‘s 5th Generation automated driving system (ADS), with details of nine new incidents on top of the first 22.


Product News

ASE introduced powerSIP, a power delivery platform designed to reduce signal and transmission loss while addressing current density challenges.

Infineon announced a roadmap for energy-efficient power supply units based on Si, SiC, and GaN to address the energy needs of AI data centers, featuring new 8 kW and 12 kW PSUs, in addition to the 3 kW and 3.3 kW units available today. The company also released its CoolSiC MOSFET 400 V family, specially developed for use in the AC/DC stage of AI servers, complementing the PSU roadmap.

Fig. 1: Infineon’s 8kW PSU. Source: Infineon

Infineon also introduced two new generations of high voltage (HV) and medium voltage (MV) CoolGaN™ devices, enabling customers to use GaN in voltage classes from 40 V to 700 V. The devices are built using Infineon’s 8-inch foundry processes.

Ansys launched Ansys Access on Microsoft Azure to provide pre-configured simulation products optimized for HPC on Azure infrastructure.

Foxconn Industrial Internet used Keysight Technology’s Open RAN Studio solution to certify an outdoor Open Radio Unit (O-RU).

Andes Technology announced an SoC and development board for the development and porting of large RISC-V applications.

MediaTek uncorked a pair of mobile chipsets built on a 4nm process that use an octa-core CPU consisting of 4X Arm Cortex-A78 cores operating at up to 2.5GHz paired with 4X Arm Cortex-A55 cores.

The NVIDIA H200 Blackwell platform is expected to begin shipping in Q3 of 2024 and will be available to data centers by Q4, according to TrendForce.

A room-temperature direct fusion hybrid bonding system from Be Semiconductor has shipped to the NHanced advanced packaging facility in North Carolina. The new system offers faster throughput for copper interconnects with submicron pad sizes, greater accuracy and reduced warpage.


Markets and Money

Frore Systems raised $80 million for its solid-state active cooling module, which removes heat from the top of a chip without fans. The device can be used in systems ranging from notebooks and network edge gateways to data centers.

Axus Technology received $12.5 million in capital equity funding to make its chemical mechanical planarization (CMP) equipment for semiconductor wafer polishing, thinning, and cleaning, including of silicon carbide (SiC) wafers.

Elon Musk’s xAI announced a series B funding round of $6 billion.

Micron was ordered to pay $445 million in damages to Netlist for patent infringement of the company’s DDR4 memory module technology between 2021 and 2024.

Global revenue from AI semiconductors is predicted to total $71 billion in 2024, up 33% from 2023, according to Gartner. In 2025, it is expected to jump to $91.9 billion. The value of AI accelerators used in servers is expected to total $21 billion in 2024 and reach $33 billion by 2028.

NAND flash revenue was $14.71 billion in Q1 2024, an increase of 28.1%, according to TrendForce.

The optical transceiver market dipped from $11 billion in 2022 to $10.9 billion in 2023, but it is predicted to reach $22.4 billion by 2029, driven by AI, 800G applications, and the transition to 200G/lane ecosystem technologies, reports Yole.

Yole also found that ultra-wideband technical choices and packaging types used by NXP, Apple, and Qorvo vary considerably, ranging from 7nm to 90nm, with both CMOS and finFET transistors.

The global market share of GenAI-capable smartphones increased to 6% in Q1 2024 from 1.3% in the previous quarter, reports Counterpoint. The premium segment accounted for over 70% of sales with Samsung on top and contributing 58%. Meanwhile, global foldable smartphone shipments were up 49% YoY in Q1 2024, led by Huawei, HONOR, and Motorola.


Security

The National Science Foundation awarded Worcester Polytechnic Institute researcher Shahin Tajik almost $0.6 million to develop new technologies to address hardware security vulnerabilities.

The Hyperform consortium was formed to develop European sovereignty in post-quantum cryptography, funded by the French government and EU credits. Members include IDEMIA Secure Transactions, CEA Leti, and the French cybersecurity agency (ANSSI).

In security research:

  • University of California Davis and University of Arizona researchers proposed a framework leveraging generative pre-trained transformer (GPT) models to automate the obfuscation process.
  • Columbia University and Intel researchers presented a secure digital low dropout regulator that integrates an attack detector and a detection-driven protection scheme to mitigate correlation power analysis.
  • Pohang University of Science and Technology (POSTECH) researchers analyzed threshold switch devices and their performance in hardware security.

The U.S. Defense Advanced Research Projects Agency (DARPA) seeks proposals for its AI Quantified program to develop technology to help deploy generative AI safely and effectively across the Department of Defense (DoD) and society.

Vanderbilt University and Oak Ridge National Laboratory (ORNL) partnered to develop dependable AI for national security applications.

The Cybersecurity and Infrastructure Security Agency (CISA) issued a number of alerts/advisories.


Research and Training

New York continues to amp up its semiconductor offerings. NY CREATES and Raytheon unveiled a semiconductor workforce training program. And Syracuse University is hosting a free virtual course focused on the semiconductor industry this summer.

In research news:

  • A team of researchers at MIT and other universities found that extreme temperatures up to 500°C did not significantly degrade GaN materials or contacts.
  • University of Cambridge researchers developed adaptive and eco-friendly sensors that can be directly and imperceptibly printed onto biological surfaces, such as a finger or flower petal.
  • Researchers at Rice University and Hanyang University developed an elastic material that moves like skin and can adjust its dielectric frequency to stabilize RF communications and counter disruptive frequency shifts that interfere with electronics when a substrate is twisted or stretched, with potential for stretchable wearable electronic devices.

The National Science Foundation (NSF) awarded $36 million to three projects chosen for their potential to revolutionize computing. The University of Texas at Austin-led project aims to create a next-gen open-source intelligent and adaptive OS. The Harvard University-led project targets sustainable computing. The University of Massachusetts Amherst-led project will develop computational decarbonization.


Quantum

Singapore will invest close to S$300 million (~$222 million) into its National Quantum Strategy to support the development and deployment of quantum technologies, including an initiative to design and build a quantum processor within the country.

Several quantum partnerships were announced:

  • Riverlane and Alice & Bob will integrate Riverlane’s quantum error correction stack within Alice & Bob’s larger quantum computing system based on cat qubit technology.
  • New York University and the University of Copenhagen will collaborate to explore the viability of hybrid superconductor-semiconductor quantum materials for the production of quantum chips and integration with CMOS processes.
  • NXP, eleQtron, and ParityQC showed off a full-stack, ion-trap based quantum computer demonstrator for Germany’s DLR Quantum Computing Initiative.
  • Photonic says it demonstrated distributed entanglement between quantum modules using optically-linked silicon spin qubits with a native telecom networking interface as part of a quantum internet effort with Microsoft.
  • Classiq and HPE say they developed a rapid method for solving large-scale combinatorial optimization problems by combining quantum and classical HPC approaches.

Events and Further Reading

Find upcoming chip industry events here, including:

Event | Date | Location
Hardwear.io Security Trainings and Conference USA 2024 | May 28 – Jun 1 | Santa Clara, CA
SWTest | Jun 3 – 5 | Carlsbad, CA
IITC2024: Interconnect Technology Conference | Jun 3 – 6 | San Jose, CA
VOICE Developer Conference | Jun 3 – 5 | La Jolla, CA
CHIPS R&D Standardization Readiness Level Workshop | Jun 4 – 5 | Online and Boulder, CO
SNUG Europe: Synopsys User Group | Jun 10 – 11 | Munich
IEEE RAS in Data Centers Summit: Reliability, Availability and Serviceability | Jun 11 – 12 | Santa Clara, CA
3D & Systems Summit | Jun 12 – 14 | Dresden, Germany
PCI-SIG Developers Conference | Jun 12 – 13 | Santa Clara, CA
AI Hardware and Edge AI Summit: Europe | Jun 18 – 19 | London, UK
DAC 2024 | Jun 23 – 27 | San Francisco
Find All Upcoming Events Here

Upcoming webinars are here, including sessions on an integrated SLM analytics solution, prototyping and validation of perception sensor systems, and improving PCB designs for performance and reliability.


Semiconductor Engineering’s latest newsletters:

Automotive, Security and Pervasive Computing
Systems and Design
Low Power-High Performance
Test, Measurement and Analytics
Manufacturing, Packaging and Materials

The post Chip Industry Week In Review appeared first on Semiconductor Engineering.

  • ✇IEEE Spectrum
  • 1-bit LLMs Could Solve AI’s Energy Demands – Matthew Hutson
    Large language models, the AI systems that power chatbots like ChatGPT, are getting better and better—but they’re also getting bigger and bigger, demanding more energy and computational power. For LLMs that are cheap, fast, and environmentally friendly, they’ll need to shrink, ideally small enough to run directly on devices like cellphones. Researchers are finding ways to do just that by drastically rounding off the many high-precision numbers that store their memories to equal just 1 or -1.LLMs
     

1-bit LLMs Could Solve AI’s Energy Demands

30 May 2024 at 20:28


Large language models, the AI systems that power chatbots like ChatGPT, are getting better and better—but they’re also getting bigger and bigger, demanding more energy and computational power. For LLMs that are cheap, fast, and environmentally friendly, they’ll need to shrink, ideally small enough to run directly on devices like cellphones. Researchers are finding ways to do just that by drastically rounding off the many high-precision numbers that store their memories to equal just 1 or -1.

LLMs, like all neural networks, are trained by altering the strengths of connections between their artificial neurons. These strengths are stored as mathematical parameters. Researchers have long compressed networks by reducing the precision of these parameters—a process called quantization—so that instead of taking up 16 bits each, they might take up 8 or 4. Now researchers are pushing the envelope to a single bit.
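
To make the idea concrete, here is a minimal Python sketch of what quantization does to a weight tensor (NumPy only; the symmetric scaling scheme below is a generic illustration, not the recipe from any particular paper):

import numpy as np

def quantize_uniform(weights, bits):
    # Symmetric uniform quantization: snap each float to one of a small set of levels.
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    return np.round(weights / scale) * scale   # dequantized values used in later matmuls

def binarize(weights):
    # 1-bit quantization: keep only the sign, plus a single per-tensor scale.
    scale = np.mean(np.abs(weights))           # one common choice of scale
    return np.sign(weights) * scale

w = np.random.randn(4, 4).astype(np.float32)
print(quantize_uniform(w, 8))   # 8-bit approximation, nearly indistinguishable from w
print(quantize_uniform(w, 4))   # 4-bit approximation, visibly coarser
print(binarize(w))              # 1-bit approximation: every entry is +scale or -scale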

How to Make a 1-bit LLM

There are two general approaches. One approach, called post-training quantization (PTQ), is to quantize the parameters of a full-precision network. The other approach, quantization-aware training (QAT), is to train a network from scratch to have low-precision parameters. So far, PTQ has been more popular with researchers.

In February, a team including Haotong Qin at ETH Zurich, Xianglong Liu at Beihang University, and Wei Huang at the University of Hong Kong introduced a PTQ method called BiLLM. It approximates most parameters in a network using 1 bit, but represents a few salient weights—those most influential to performance—using 2 bits. In one test, the team binarized a version of Meta’s LLaMa LLM that has 13 billion parameters.
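
BiLLM's actual procedure is more involved, but the core idea of spending extra bits only on a few influential weights can be sketched as follows (a simplified illustration: plain weight magnitude stands in for the paper's salience criterion, and a second sign pass over the residual stands in for its 2-bit treatment):

import numpy as np

def binarize(w):
    # 1-bit approximation: sign times a per-tensor scale.
    return np.sign(w) * np.mean(np.abs(w))

def mixed_precision_binarize(weights, salient_frac=0.05):
    mags = np.abs(weights)
    salient = mags >= np.quantile(mags, 1.0 - salient_frac)   # top 5% by magnitude

    one_bit = binarize(weights)
    two_bit = one_bit + binarize(weights - one_bit)           # second sign pass on the residual

    avg_bits = 2 * salient.mean() + 1 * (1 - salient.mean())  # average storage cost per weight
    return np.where(salient, two_bit, one_bit), avg_bits

w = np.random.randn(256, 256)
approx, avg_bits = mixed_precision_binarize(w)
print(f"average bits per weight: {avg_bits:.2f}")             # roughly 1.05 with 5% salient weights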

“One-bit LLMs open new doors for designing custom hardware and systems specifically optimized for 1-bit LLMs.” —Furu Wei, Microsoft Research Asia

To score performance, the researchers used a metric called perplexity, which is basically a measure of how surprised the trained model was by each ensuing piece of text. For one dataset, the original model had a perplexity of around 5, and the BiLLM version scored around 15, much better than the closest binarization competitor, which scored around 37 (for perplexity, lower numbers are better). That said, the BiLLM model required only about a tenth of the memory capacity of the original.
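
For reference, perplexity is just the exponential of the average negative log-likelihood the model assigns to the actual next tokens, so a perplexity of 5 means the model is, on average, as uncertain as if it were choosing among 5 equally likely tokens. A minimal sketch:

import numpy as np

def perplexity(token_log_probs):
    # token_log_probs: natural-log probabilities the model assigned to each true next token.
    return float(np.exp(-np.mean(token_log_probs)))

# A model that gives every true token probability 0.2 has perplexity 5.
print(perplexity(np.log([0.2, 0.2, 0.2, 0.2])))   # 5.0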

PTQ has several advantages over QAT, says Wanxiang Che, a computer scientist at Harbin Institute of Technology, in China. It doesn’t require collecting training data, it doesn’t require training a model from scratch, and the training process is more stable. QAT, on the other hand, has the potential to make models more accurate, since quantization is built into the model from the beginning.

1-bit LLMs Find Success Against Their Larger Cousins

Last year, a team led by Furu Wei and Shuming Ma, at Microsoft Research Asia, in Beijing, created BitNet, the first 1-bit QAT method for LLMs. After fiddling with the rate at which the network adjusts its parameters, in order to stabilize training, they created LLMs that performed better than those created using PTQ methods. They were still not as good as full-precision networks, but roughly 10 times as energy efficient.
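
The usual mechanics behind this kind of quantization-aware training are to keep a full-precision copy of the weights, binarize it on the fly in the forward pass, and let gradients flow "straight through" the rounding step to update the full-precision copy. The PyTorch sketch below shows that generic pattern; it is not BitNet's exact training recipe.

import torch
import torch.nn as nn

class BinaryLinear(nn.Module):
    # A linear layer whose weights are binarized to +/- scale in the forward pass.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

    def forward(self, x):
        w = self.weight
        w_bin = torch.sign(w) * w.abs().mean()        # 1-bit weights with a per-tensor scale
        # Straight-through estimator: compute with binarized weights, but let gradients
        # reach the full-precision weights as if the rounding step were the identity.
        w_ste = w + (w_bin - w).detach()
        return x @ w_ste.t()

layer = BinaryLinear(16, 8)
loss = layer(torch.randn(4, 16)).sum()
loss.backward()                                       # gradients land on the full-precision copy
print(layer.weight.grad.shape)                        # torch.Size([8, 16])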

In February, Wei’s team announced BitNet 1.58b, in which parameters can equal -1, 0, or 1, which means they take up roughly 1.58 bits of memory per parameter. A BitNet model with 3 billion parameters performed just as well on various language tasks as a full-precision LLaMA model with the same number of parameters and amount of training, but it was 2.71 times as fast, used 72 percent less GPU memory, and used 94 percent less GPU energy. Wei called this an “aha moment.” Further, the researchers found that as they trained larger models, efficiency advantages improved.
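
The 1.58 figure is simply log2(3): with three possible values per parameter, each one carries about 1.58 bits of information. A tiny sketch (the rounding rule below is an illustrative choice, not necessarily the exact quantizer BitNet 1.58b uses):

import math
import numpy as np

print(math.log2(3))                                  # ~1.585 bits per three-valued parameter

def ternarize(weights):
    # Map each weight to {-1, 0, +1} times a per-tensor scale.
    scale = np.mean(np.abs(weights))
    return np.clip(np.round(weights / scale), -1, 1) * scale

w = np.random.randn(8, 8)
print(np.unique(np.sign(ternarize(w))))              # typically [-1.  0.  1.]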

A BitNet model with 3 billion parameters performed just as well on various language tasks as a full-precision LLaMA model.

This year, a team led by Che, of Harbin Institute of Technology, released a preprint on another LLM binarization method, called OneBit. OneBit combines elements of both PTQ and QAT. It uses a full-precision pretrained LLM to generate data for training a quantized version. The team’s 13-billion-parameter model achieved a perplexity score of around 9 on one dataset, versus 5 for a LLaMA model with 13 billion parameters. Meanwhile, OneBit occupied only 10 percent as much memory. On customized chips, it could presumably run much faster.

Wei, of Microsoft, says quantized models have multiple advantages. They can fit on smaller chips, they require less data transfer between memory and processors, and they allow for faster processing. Current hardware can’t take full advantage of these models, though. LLMs often run on GPUs like those made by Nvidia, which represent weights using higher precision and spend most of their energy multiplying them. New hardware could natively represent each parameter as a -1 or 1 (or 0), and then simply add and subtract values and avoid multiplication. “One-bit LLMs open new doors for designing custom hardware and systems specifically optimized for 1-bit LLMs,” Wei says.
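
The hardware argument is easy to see in code: once weights are restricted to +1 and -1, a matrix-vector product reduces to adding and subtracting activations, with one multiplication per output for the scale. A toy NumPy check (illustrative only; real accelerators would implement this in dedicated logic):

import numpy as np

def binary_matvec(sign_weights, scale, x):
    # Computes (sign_weights * scale) @ x without any per-weight multiplications.
    pos = np.where(sign_weights > 0, x, 0.0).sum(axis=1)   # add activations where the weight is +1
    neg = np.where(sign_weights < 0, x, 0.0).sum(axis=1)   # add activations where the weight is -1
    return scale * (pos - neg)                              # one multiply per output element

rng = np.random.default_rng(0)
signs = np.sign(rng.standard_normal((8, 16)))
x = rng.standard_normal(16)

assert np.allclose(binary_matvec(signs, 0.05, x), (signs * 0.05) @ x)
print("addition-only result matches the ordinary matrix multiply")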

“They should grow up together,” Huang, of the University of Hong Kong, says of 1-bit models and processors. “But it’s a long way to develop new hardware.”

  • ✇Ars Technica - All content
  • Tech giants form AI group to counter Nvidia with new interconnect standard – Benj Edwards
    Enlarge (credit: Getty Images) On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today'
     

Tech giants form AI group to counter Nvidia with new interconnect standard

30 May 2024 at 22:42

(Image credit: Getty Images)

On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.

The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architecture—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.

This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.

  • ✇GAME PRESS
  • TEST: KIWI Design RGB Charging Station and SPC Battery Strap for Meta Quest 3 – Jiří Morávek
    If you are a proud owner of one of the VR headsets, you have probably already dealt with how to store your device, how to charge it and, last but not least, how to use it as comfortably as possible. Meta does not offer solutions in this area worth applauding, so it is no surprise that a number of other companies have jumped on this gap in the market. Some succeed less, some somewhat more. And some succeed so well that their products end up being distributed…
     

TEST: KIWI Design RGB Charging Station and SPC Battery Strap for Meta Quest 3

29 May 2024 at 19:07

If you are a proud owner of one of the VR headsets, you have probably already dealt with how to store your device, how to charge it and, last but not least, how to use it as comfortably as possible. Meta does not offer solutions in this area that would be worth applauding. It is no surprise, then, that a number of other companies have jumped on this gap in the market. Some succeed less, some somewhat more. And some succeed so well that their products end up being distributed with the official Made for Meta badge. KIWI Design is one of them.

The company’s representatives sent us a selection from their product range, all of which we will test over time. Today we start with a pair of products where you can see at first glance that even the smallest details and user needs were considered during their design. That does not mean, however, that I did not come across a few details during testing that I was not too happy about. So let’s get to it!

First we will test the SPC Battery Strap, then the RGB charging stand, and finally we will look at how these two products work together and whether they get in each other’s way.

SPC Battery Head Strap for Meta Quest

It is no secret that the head strap supplied with the Quest by default is very plain and does not exactly excel in quality either. That is why a strap is often the first upgrade new owners look for.

The package contains the strap with the battery, a head cushion with an over-the-head band, and a clip that safely routes the power cable. Before I get to describing the strap itself, I have to mention how both products are packaged. KIWI Design is not a cheap brand, and the packaging certainly does no disservice to the word Design in its name. After opening the sturdy box you are greeted by the company logo and its motto: Make Things Better.

Don’t expect a construction kit. The strap consists of two basic parts that take barely a few seconds to join. Then you just fit the clip, snap the power cable into it, and the strap is ready to be attached to the Quest. At first I was not too sure about the build quality, but after a few gentle squeezes it was clear that I did not need to worry about cracks or other damage.

Another factor on my mind while waiting for the packages to arrive was the weight of the whole assembly. The strap has a large battery built in, so I was slightly worried about my neck muscles. That concern is handled by the top strap, which takes a significant part of the total weight, and thanks to its wide and soft construction you do not even notice it while playing. The battery takes up the entire rear side, and in its center there is an adjustment dial for changing the size of the strap to fit your head. Of course there is also a USB-C connector for charging and 4 LEDs indicating the battery status. A cable runs from its left side through the clip mentioned earlier, and as you may have guessed, this cable is not entirely ordinary either.

Its magic lies in the double angled SPC connector, which we will get to later. One output is a classic USB-C for powering the Quest directly; the other is on the bottom and takes the form of a magnetic 3-pin connector, which is the ideal link to the vertical charging stand covered below. Here I would appreciate being able to push the cable coming out of the battery a few centimeters back toward the battery. As it is, its length cannot be adjusted, so for users with a smaller head the cable forms an unnatural loop on the left side of the headset. It is purely a cosmetic issue, though. Overall, the construction is fairly standard and other manufacturers stick to a similar design. For me, the real magic of the KIWI Design Battery Strap lies in the battery and its performance.

The battery has a capacity of 6,400 mAh, and the manufacturer claims it should extend usage time by 2 to 4 hours. Naturally, I had to verify that claim. I went about it my own way and put on the Czech hockey team’s last group-stage match in the Quest. I watched it through the built-in browser, over Wi-Fi, with mixed reality enabled, which is more demanding on battery life. I also spent the intermissions playing Eleven Table Tennis and Beat Saber. The result? I checked the battery right after the match ended and it was at 100%, and the headset indicated it was still being charged from the strap’s external battery. It was clear the headset would not run down during the test, but I was surprised that the built-in battery stayed fully topped up the whole time. Thumbs up!

SPC Battery Head Strap for Meta Quest – 9/10

RGB Vertical Charging Stand

The second and no less important part of the combo is the vertical stand with RGB lighting. It arrives in packaging just as stylish as the strap’s. Here, however, you will have to put in a bit of effort, because the stand comes disassembled into several parts. Putting it together is a matter of a few seconds. The base is solid and sturdy, and thanks to its higher weight you do not have to worry about knocking it over with a nudge. The main frame attaches to the base, and it is the only piece of this puzzle I had an issue with in terms of build quality. After snapping it onto the base, in my case it wobbled slightly and did not sit perfectly. The resting tray with the RGB trim and a USB-C input for the power cable then clicks onto this frame. On its underside there are not only two practical controller holders but also a smart lighting control. You can switch between classic colors and the popular rainbow combination, and if you do not like the colors or the lighting does not suit you, the RGB can be turned off.

My biggest problem was with the dimensions of the resting tray. If you have a smaller head, feel free to skip this paragraph and rejoice. With the strap adjusted to a size comfortable for me, the headset slid backwards off the stand several times and was left hanging in an unnatural position. The rear part of the tray is fairly short and there is no lip that would stop it from sliding off. I firmly believe I will not be the only one with this problem. Several solutions came to mind: a) tighten the headset strap to a size that will not slip off the stand, or b) be extremely careful every time I put the headset down. Neither option is particularly practical or user-friendly. For now I am dealing with it using option A and thinking about some improvement with a 3D printer. I firmly believe, though, that KIWI Design will address this part of the setup and make the needed improvements.

The most interesting part of the stand is the soft, flexible cable rising exactly into the spot where the docked headset has its charging connector. And this is where the real magic of combining the strap with this stand comes in: the abbreviation SPC, Single Point Charging. The moment you set a headset with the SPC Battery strap down on the KIWI Design stand, both the VR headset and the strap’s extra battery start charging. Everything is connected through the magnetic connectors on the strap’s cable and the aforementioned soft cable running from the stand, which copes reliably with various angles. If it were not for my problem with the size of the tray, this would be an absolutely perfect and user-friendly solution. Even so, I would have a number of ideas for improving the stand. Besides a longer surface for the rear of the strap to rest on, I would widen the openings for the controllers and replace the surface the main part of the headset sits on with something that has an anti-slip finish.

RGB Vertical Charging Stand – 7/10

 

If you pick KIWI Design products with the Made for Meta certification for your VR ecosystem, you certainly will not go wrong. The price-to-performance ratio is excellent, and besides working well together, these products will also dress up your gaming den, desk, or workspace. A few rough edges can be found, but despite my complaints I have to admit that the pluses outweigh the minuses.

KIWI Design has also provided us with more of its products for testing. Next time we will look at two very interesting 5-meter Link cables, lens protection, and grips for the Quest 3 controllers.

The article TEST: KIWI Design RGB Charging Station and SPC Battery Strap for Meta Quest 3 first appeared on GAME PRESS.

  • ✇GAME PRESS
  • MudRunner VR releases next week! – Jiří Morávek
    The original MudRunner won the hearts of players and hardcore-simulator enthusiasts back in 2017. Since then we have seen a sequel in the form of the massive SnowRunner and the fresh newcomer Expeditions. Now the series is also heading into the world of virtual reality. Owners of Meta Quest headsets will be able to enjoy this hardcore simulator of hauling oversized loads in extreme conditions very soon, on May 30 to be exact. Players are in for noticeably more work than in the classic versions. You can forget about, for example…
     

MudRunner VR releases next week!

20 May 2024 at 21:46

The original MudRunner won the hearts of players and hardcore-simulator enthusiasts back in 2017. Since then we have seen a sequel in the form of the massive SnowRunner and the fresh newcomer Expeditions. Now the series is also heading into the world of virtual reality.

Owners of Meta Quest headsets will be able to enjoy this hardcore simulator of hauling oversized loads in extreme conditions very soon, on May 30 to be exact. Players are in for noticeably more work than in the classic versions. You can forget about the automatic winch attachment, for example. If you need to use that feature, you will simply have to get out, grab the hook, walk to a usable tree, and return to the cab. The whole game is built in this spirit, and although that may sound like an incredible chore to some, to many players, myself included, it sounds like a dream.

The developers have previously promised detailed vehicle models with fully modeled interiors. At launch the game should include 8 vehicles of various classes and purposes. There will be a free-roam mode and a story campaign offering a range of challenges. We can expect the classic MudRunner hazards such as running out of fuel or excessive vehicle damage.

One of the big question marks on my mind as a SnowRunner fan is how the developers will handle the first-person view when wading through a deep river. In the standard games, the camera switches to a different view in that situation.

MudRunner VR launches on May 30 for Meta Quest headsets.

The article MudRunner VR releases next week! first appeared on GAME PRESS.

  • ✇- SamMobile
  • More people get WhatsApp’s Chat Lock feature on linked devices – Abid Iqbal Shaik
    In May 2023, Meta introduced the Chat Lock feature in WhatsApp, which allows you to lock and hide chats behind a passcode or biometric authentication. While this is a very useful feature, it only works on primary devices. Fortunately, last month we got to know that the company had started testing Chat Lock on linked devices by rolling it out to a limited number of people. Well, Meta is now making the feature available to more people. According to a new report from WABetaInfo, the latest beta ver
     

More people get WhatsApp’s Chat Lock feature on linked devices

19 May 2024 at 22:54

In May 2023, Meta introduced the Chat Lock feature in WhatsApp, which allows you to lock and hide chats behind a passcode or biometric authentication. While this is a very useful feature, it only works on primary devices. Fortunately, last month we got to know that the company had started testing Chat Lock on linked devices by rolling it out to a limited number of people. Well, Meta is now making the feature available to more people.

According to a new report from WABetaInfo, the latest beta version of WhatsApp for Android (version 2.24.11.9) offers the Chat Lock feature on linked devices as well. More importantly, the feature seems to be available for everyone who is using this version of the app rather than just to a select few users.

At the moment, there’s no information about when Meta will make this feature available in the stable version of WhatsApp. However, we expect the company to roll it out to the public after thoroughly testing it in the beta version of the app, which could take at least a couple of weeks.

The post More people get WhatsApp’s Chat Lock feature on linked devices appeared first on SamMobile.

  • ✇Techdirt
  • Ctrl-Alt-Speech: Do You Really Want The Government In Your DMs?Leigh Beadon
    Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw. Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed. In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover: Commission opens formal proceedings against Meta under the Digital Services Act related to the
     

Ctrl-Alt-Speech: Do You Really Want The Government In Your DMs?

18 May 2024 at 00:15

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

  • ✇GAME PRESS
  • A $1,000 VR app will teach you to weld – Jiří Morávek
    Over the last few days, apps and games that were previously available only through App Lab have been appearing on the official Meta Horizon Store. One of them is WeldVR, which surprises users not only with its focus but above all with its price tag of around 23,000 CZK. That makes WeldVR the most expensive app you can find on the official store. Until now, the priciest app was Fetal Heart VR, which was used to train medical students. $1,000 is also the new maximum found in the store. More expensive games…
     

A $1,000 VR app will teach you to weld

17 May 2024 at 21:29

Over the last few days, apps and games that were previously available only through App Lab have been appearing on the official Meta Horizon Store. One of them is WeldVR, which surprises users not only with its focus but above all with its price tag of around 23,000 CZK.

That makes WeldVR the most expensive app you can find on the official store. Until now, the priciest app was Fetal Heart VR, which was used to train medical students.

$1,000 is also the new price ceiling in the store; Meta simply does not allow more expensive titles. As you might suspect, though, entirely different rules apply at Apple, whose App Store allows games and apps priced at up to $10,000.

So what does WeldVR offer? It is a program for teaching the basics of welding, aimed at users who cannot afford the expensive welding rigs they would otherwise learn this “art” on. So although the price tag may seem steep, it could be a genuinely useful application with real value for end users.

Training programs for MIG, TIG, and CO2 welding are available, and you can practice in various positions and situations. Users are then offered rich statistics and reports that help virtual welders improve more easily. The app itself costs $1,000, but the developers at studio Cythero can also offer interested buyers a complete package that includes, besides WeldVR, a Meta Quest headset, a case, and other extras including online consulting. That combo comes to $6,000.

If that price tag makes your head spin, don’t despair. On the Meta Horizon Store you will find a trial version that lets you weld for free for a full 20 minutes.

The article A $1,000 VR app will teach you to weld first appeared on GAME PRESS.

  • ✇Techdirt
  • Ctrl-Alt-Speech: Between A Rock And A Hard PolicyLeigh Beadon
    Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw. Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed. In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover: Stack Overflow bans users en masse for rebelling against OpenAI partnership (Tom’s Hardware) T
     

Ctrl-Alt-Speech: Between A Rock And A Hard Policy

11 May 2024 at 00:25

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

  • ✇Boing Boing
  • Instagram's terrifying false positive nearly ruined a tech writer's life – Mark Frauenfelder
    Earlier this week, tech writer and Google Ventures general partner M.G. Siegler received a late-night notification from Meta informing him that his accounts on Instagram, Threads, and Facebook had been suspended for a horrifying alleged violation that he refuses to even name. — Read the rest The post Instagram's terrifying false positive nearly ruined a tech writer's life appeared first on Boing Boing.
     

Instagram's terrifying false positive nearly ruined a tech writer's life

10 May 2024 at 18:25

Earlier this week, tech writer and Google Ventures general partner M.G. Siegler received a late-night notification from Meta informing him that his accounts on Instagram, Threads, and Facebook had been suspended for a horrifying alleged violation that he refuses to even name.

The post Instagram's terrifying false positive nearly ruined a tech writer's life appeared first on Boing Boing.

  • ✇Pocketables
  • Meta AI is a thing now, if you’re interested in a barely-connected Ai – Paul E King
    Meta has thrown their version of Ai into the fray against ChatGPT and Gemini with a no-account-required Ai that is accessible at meta.ai. Signing in using Facebook allows you to see previous chats and presumably remember other interactions. Meta’s image generation is surprising as it generates images while you’re typing. It generates slightly faster than I can type so there’s something at every word. It also has the ability to create a video of quite a few of the changes it made during
     

Meta AI is a thing now, if you’re interested in a barely-connected Ai

19 April 2024 at 21:58

Meta has thrown their version of Ai into the fray against ChatGPT and Gemini with a no-account-required Ai that is accessible at meta.ai. Signing in using Facebook allows you to see previous chats and presumably remember other interactions.

Meta’s image generation is surprising as it generates images while you’re typing. It generates slightly faster than I can type so there’s something at every word.

Meta AI image

It also has the ability to create a video of quite a few of the changes it made during creating your final image. Just from this it missed a chunk, but I’m not sure how much AI bandwidth actually needs to be devoted to my attempts at creating a bar scene with a guinea pig and a cockatiel looking at an HTC EVO 4G and checking social media while a kung fu fight is breaking out behind them.

As LLMs go it’s really fast, which I enjoy after watching Copilot (chatgpt) slowly type out the answers.

I threw some questions at Meta Ai, most of them “help me remember this,” but it seems like there’s a popular culture filter it’s seeing the world through (in other words, it did not have much of a clue about a couple of old novellas whose names I was attempting to figure out).

Side note – if you happen to have read a short story about a person being introduced to an Ai that generates historical figures that teach us that our pronunciation of Latin is incorrect and that AI Napoleon must be banned from internet access, drop me a line. I couldn’t get any of the 3 major AIs to look into old Sci-Fi.

The instant image generation makes this amazing, but the lack of internet access and research ability sort of puts it into my “check back later” category because I need current information, not late 2022. My needs, however, are not yours. This does some terrific image generation and text generation, but it can’t do my research for me. This may be incorrect; it does appear to have some 2024 info, I’m just not seeing much of it in my queries.

Meta AI generated photo
There’s an AI generated stamp in the bottom right that I did not intentionally blur, just something about getting this into WordPress via screenshot = blur. I suspect you’ll be looking for that blur on Facebook soon enough.

The hands are, as AI goes these days, terrible. The above image sort of got it right, but most have hallucinated hands, elbows, etc. The dirty picture filtering algorithm is in full effect and somewhat laughable. Try to put a semi-transparent hat on someone doing yoga and it blurs things out, because obviously transparent hats on women already doing yoga mean I’m going for nudity. But imagining transparent hats with a woman under them doing yoga is fine.

Meta AI claims to have video generation via /video… however when I use it it just looks up videos that are on Facebook or Instagram… which it does appear to be connected to. Attempts to search for videos that I produced that are on Instagram or Facebook failed however… not really sure how this video searching is going.

Pretty awesome, needs some work, but all of them do. For now, as much as I dislike it, Copilot is what’s working for current news. Meta appears to be seeing information from a couple of months back (at least on my site), while Copilot sees info from us from a week ago. That said, Meta does not appear to have hallucinated anything other than hands.

You can try it at Meta.AI

Meta AI is a thing now, if you’re interested in a barely-connected Ai by Paul E King first appeared on Pocketables.

💾

This is from April 19th, and is not the image that I ended up with, but it's an interesting video showcasing what the Meta image generator spewed out as I ty...
  • ✇Techdirt
  • Link Taxes Backfire: Canadian News Outlets Lose Out, Meta Unscathed – Mike Masnick
    As California (and possibly Congress) are, again, revisiting instituting link taxes in the US, it’s worth highlighting that our prediction about the Canadian link tax has now been shown to be correct. It didn’t harm Meta one bit to remove news. The entire premise behind these link taxes/bargaining codes is that social media gets “so much free value” from news orgs, that they must pay up. Indeed, a ridiculously bad study that came out last fall, and was widely passed around, that argued that Goog
     

Link Taxes Backfire: Canadian News Outlets Lose Out, Meta Unscathed

9 May 2024 at 22:16

As California (and possibly Congress) are, again, revisiting instituting link taxes in the US, it’s worth highlighting that our prediction about the Canadian link tax has now been shown to be correct. It didn’t harm Meta one bit to remove news.

The entire premise behind these link taxes/bargaining codes is that social media gets “so much free value” from news orgs that they must pay up. Indeed, a ridiculously bad study that came out last fall, and was widely passed around, argued that Google and Meta had stripped $14 billion worth of value from news orgs and should offer to pay up that amount.

$14 billion. With a “b.”

No one, who understands anything, believes that’s true. Again, social media is not taking value away from news orgs. It’s giving them free distribution and free circulation, things that, historically, cost media organizations a ton of money.

But, now a study, in Canada is proving that social media companies get basically zero value from news links. Meta, somewhat famously, blocked links to news in Canada in response to that country’s link tax. This sent many observers into a tizzy, claiming that it was somehow both unfair for Meta to link to news orgs AND to not link to news orgs.

Yes, media organizations are struggling. Yes, the problems facing the news industry are incredibly important to solve to help protect democracy. Yes, we should be thinking and talking about creative solutions for funding.

But, taxing links to force internet companies to pay media companies is simply a terrible solution.

Thanks to Meta not giving in to Canada and instead blocking links to news, we now have some data on what happens when a link tax approach is put in place. Some new research from McGill University and the University of Toronto has found that Meta didn’t lose very much engagement from a lack of news links. But media orgs lost out big time.

Laura Hazard Owen has a good summary at Nieman Lab.

“We expected the disappearance of news on Meta platforms to have caused a major shock to the Canadian information ecosystem,” the paper’s authors — Sara Parker, Saewon Park, Zeynep Pehlivan, Alexei Abrahams, Mika Desblancs, Taylor Owen, Jennie Phillips, and Aengus Bridgman — write. But the shock appears to have been one-sided. While “the ban has significantly impacted Canadian news outlets,” the authors write, “Meta has deprived users of the affordance of news sharing without suffering any loss in engagement of their user base.”

What the researchers found is that users are still using Meta platforms just as much, and still getting news from those platforms. They’re just no longer following links back to the sources. This has done particular harm to smaller local news organizations:

This remarkable stability in Meta platform users’ continued use of the platforms for politics and current affairs anticipates the findings from the detailed investigation into engagement and posting behaviour of Canadians. We find that the ban has significantly impacted Canadian news outlets but had little impact on Canadian user behaviour. Consistent with the ban’s goal, we find a precipitous decline in engagement with Canadian news and consequently the posting of news content by Canadian news outlets. The effect is particularly acute for local news outlets, while some news outlets with national or international scope have been able to make a partial recovery after a few months. Additionally, posting by and engagement with alternative sources of information about Canadian current affairs appears unmoved by the ban. We further find that Groups focused on Canadian politics enjoy the same frequency of posting and diversity of engagement after the ban as before. While link sharing declines, we document a complementary uptick in the sharing of screenshots of Canadian news in a sample of these political Groups, and confirm by reviewing a number of such posts where users deliberately circumvented the link-sharing ban by posting screenshots. Although the screenshots do not compensate for the total loss of link sharing, the screenshots themselves garner the same total amount of engagement as news links previously had.

I feel like I need to keep pointing this out, but: when you tax something, you get less of it. Canada has decided to tax news links, so they get fewer news links. But users still want to talk about news, so they’re replacing the links with screenshots and discussions. And it’s pretty impressive how quickly users switched over:

Image

Meaning the only ones losing out here are the news publishers themselves, who claimed to have wanted this law so badly.

The impact on Canadian news orgs appears to be quite dramatic:

Image

But the activity on Meta platform groups dedicated to news doesn’t seem to have changed that much:

Image

If “news links” were so valuable to Meta, then, um, wouldn’t that have declined once Meta blocked links?

One somewhat incredible finding in the paper is that “misinformation” links also declined after Meta banned news links:

Surprisingly, the number of misinformation links in political and local community Groups decreased after the ban.

Political Groups:

  • Prior to the ban: 2.8% of links (5612 out of 198,587 links) were misinformation links
  • After the ban: 1.4% of links (5306 out of 379,202 links) were misinformation links

Though the paper admits that this could just be a function of users recognizing they can’t share links.

This is still quite early research, but it is notable, especially given that the US continues to push for this kind of law as well. Maybe, just maybe, we should take a step back and recognize that taxing links is not helpful for news orgs and misunderstands the overall issue.

It’s becoming increasingly clear that the approach taken by Canada and other countries to force platforms like Meta to pay for news links is misguided and counterproductive. These laws are reducing the reach and engagement of news organizations while doing little to address the underlying challenges facing the industry. Instead of helping news organizations, these laws are having the opposite effect. Policymakers need to take a more nuanced and evidence-based approach that recognizes the complex dynamics of the online news ecosystem.

  • ✇Android Authority
  • How to run Meta’s Llama 3 on your PC – Gary Sims
    Meta, the company formerly known as Facebook, has recently unveiled Llama 3, the latest iteration of its large language model. This advanced model is available in two versions: an eight billion (8B) parameter version and a 70 billion (70B) parameter version. In this article, we will explore how to run the 8B parameter version of Llama 3 locally, a more feasible option for standard desktops or laptops that may struggle to run the larger 70B version. Llama 3’s performance overview Llama 3 is a
     

How to run Meta’s Llama 3 on your PC

By: Gary Sims
3 May 2024 at 12:33

Meta, the company formerly known as Facebook, has recently unveiled Llama 3, the latest iteration of its large language model. This advanced model is available in two versions: an eight billion (8B) parameter version and a 70 billion (70B) parameter version. In this article, we will explore how to run the 8B parameter version of Llama 3 locally, a more feasible option for standard desktops or laptops that may struggle to run the larger 70B version.

Llama 3’s performance overview

Llama 3 is an impressive large language model. The 8B parameter version, trained using 1.3 million hours of GPU time, outperforms its predecessor, Llama 2, in several ways. For instance, it is 34% better than the 7 billion parameter version of Llama 2 and 14% better than the 13 billion parameter version, meaning the 8B version of Llama 3 surpasses a Llama 2 model with significantly more parameters. It only falls short by 8% when compared to the 70B parameter version of Llama 2, making it an impressive model for its size.
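
As a concrete starting point, here is one common way to load the 8B model locally with the Hugging Face Transformers library (a generic sketch, not necessarily the setup the article describes; it assumes you have requested access to the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint and have roughly 16 GB of RAM or VRAM free for fp16 weights):

# pip install transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"     # gated model: accept Meta's license first

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,    # half-precision weights, about 16 GB
    device_map="auto",            # place layers on GPU/CPU automatically (needs accelerate)
)

messages = [{"role": "user", "content": "Summarize what a large language model is in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))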

  • ✇Android Authority
  • Can you turn off Meta AI on Facebook, Instagram, and WhatsApp? – Haroun Adamu
    If you’ve recently found yourself staring at unfamiliar buttons and prompts on your favorite Meta apps, you’re not alone. The recent rollout marks the debut of Meta AI’s latest large language model, Llama 3, promising more personalized interactions but also potentially more intrusive AI suggestions. Still, this is the closest it has ever been to ChatGPT. But what if you’re not ready to embrace the AI revolution? What if you find the integration more cumbersome than helpful? Maybe you hate that
     

Can you turn off Meta AI on Facebook, Instagram, and WhatsApp?

30 April 2024 at 14:39

If you’ve recently found yourself staring at unfamiliar buttons and prompts on your favorite Meta apps, you’re not alone. The recent rollout marks the debut of Meta AI’s latest large language model, Llama 3, promising more personalized interactions but also potentially more intrusive AI suggestions. Still, this is the closest it has ever been to ChatGPT.

But what if you’re not ready to embrace the AI revolution? What if you find the integration more cumbersome than helpful? Maybe you hate that the answers are not completely reliable. But is there any way to turn off the new Meta AI integration and regain some semblance of your pre-AI browsing experience?

  • ✇Pocketables
  • Meta AI is a thing now, if you’re interested in a barely-connected AiPaul E King
    Meta has thrown their version of Ai into the fray against ChatGPT and Gemini with a no-account-required Ai that is accessible at meta.ai. Signing in using Facebook allows you to see previous chats and presumably remember other interactions. Meta’s image generation is surprising as it generates images while you’re typing. It generates slightly faster than I can type so there’s something at every word. It also has the ability to create a video of quite a few of the changes it made during
     

Meta AI is a thing now, if you’re interested in a barely-connected Ai

19. Duben 2024 v 21:58

Meta has thrown their version of Ai into the fray against ChatGPT and Gemini with a no-account-required Ai that is accessible at meta.ai. Signing in using Facebook allows you to see previous chats and presumably remember other interactions.

Meta’s image generation is surprising as it generates images while you’re typing. It generates slightly faster than I can type so there’s something at every word.

Meta AI image

It also has the ability to create a video of quite a few of the changes it made while creating your final image. The video it gave me missed a chunk of the process, but I’m not sure how much AI bandwidth actually needs to be devoted to my attempts at creating a bar scene with a guinea pig and a cockatiel looking at an HTC EVO 4G and checking social media while a kung fu fight breaks out behind them.

As LLMs go it’s really fast, which I enjoy after watching Copilot (ChatGPT) slowly type out its answers.

I threw some questions at Meta AI, most of them of the “help me remember this” variety, but it seems to see the world through a popular-culture filter (in other words, it didn’t have much of a clue about a couple of old novellas whose names I was trying to figure out).

Side note: if you happen to have read a short story about a person being introduced to an AI that generates historical figures, who teach us that our pronunciation of Latin is incorrect and that an AI Napoleon must be banned from internet access, drop me a line. I couldn’t get any of the three major AIs to look into old sci-fi.

The instant image generation makes this amazing, but the lack of internet access and research ability sort of puts it into my “check back later” category, because I need current information, not late 2022. My needs, however, are not yours. It does some terrific image and text generation, but it can’t do my research for me. That assessment may be slightly off; it does appear to have some 2024 info, I’m just not seeing much of it in my queries.

Meta AI generated photo
There’s an AI-generated stamp in the bottom right that I did not intentionally blur; something about getting this into WordPress via screenshot blurred it. I suspect you’ll be looking for that blur on Facebook soon enough.

The hands are, as AI images go these days, terrible. The image above sort of got it right, but most have hallucinated hands, elbows, etc. The dirty-picture filtering algorithm is in full effect and somewhat laughable. Try to put a semi-transparent hat on someone doing yoga and it blurs things out, because obviously a transparent hat on a woman already doing yoga means I’m going for nudity. But asking it to imagine transparent hats with a woman under them doing yoga is fine.

Meta AI claims to have video generation via /video… however, when I use it, it just looks up videos that are on Facebook or Instagram, which it does appear to be connected to. Attempts to search for videos I produced that are on Instagram or Facebook failed, however, so I’m not really sure how this video searching is going.

Pretty awesome, and it needs some work, but all of them do. For now, as much as I dislike it, Copilot is what’s working for current news: Meta appears to be seeing information from a couple of months back (at least on my site), while Copilot sees info from us a week ago. That said, Meta does not appear to have hallucinated anything other than hands.

You can try it at Meta.AI

Meta AI is a thing now, if you’re interested in a barely-connected Ai by Paul E King first appeared on Pocketables.

💾

This is from April 19th, and is not the image that I ended up with, but it's an interesting video showcasing what the Meta image generator spewed out as I ty...
  • ✇Techdirt
  • Was There A Trojan Horse Hidden In Section 230 All Along That Could Enable Adversarial Interoperability?Mike Masnick
    There’s a fascinating new lawsuit against Meta that includes a surprisingly novel interpretation of Section 230. If the court buys it, this interpretation could make the open web a lot more open, while chipping away at the centralized control of the biggest tech companies. And, yes, that could mean that the law (Section 230) that is wrongly called “a gift to big tech” might be a tool that undermines the dominance of some of those companies. But the lawsuit could be tripped up for any number of r
     

Was There A Trojan Horse Hidden In Section 230 All Along That Could Enable Adversarial Interoperability?

2 May 2024 at 18:23

There’s a fascinating new lawsuit against Meta that includes a surprisingly novel interpretation of Section 230. If the court buys it, this interpretation could make the open web a lot more open, while chipping away at the centralized control of the biggest tech companies. And, yes, that could mean that the law (Section 230) that is wrongly called “a gift to big tech” might be a tool that undermines the dominance of some of those companies. But the lawsuit could be tripped up for any number of reasons, including a potentially consequential typo in the law that has been ignored for years.

Buckle in, this is a bit of a wild ride.

You would think with how much attention has been paid to Section 230 over the last few years (there’s an entire excellent book about it!), and how short the law is, that there would be little happening with the existing law that would take me by surprise. But the new Zuckerman v. Meta case filed on behalf of Ethan Zuckerman by the Knight First Amendment Institute has got my attention.

It’s presenting a fairly novel argument about a part of Section 230 that almost never comes up in lawsuits, but could create an interesting opportunity to enable all kinds of adversarial interoperability and middleware to do interesting (and hopefully useful) things that the big platforms have been using legal threats to shut down.

If the argument works, it may reveal a surprising and fascinating trojan horse for a more open internet, hidden in Section 230 for the past 28 years without anyone noticing.

Of course, it could also have much wider ramifications that a bunch of folks need to start thinking through. This is the kind of thing that happens when someone discovers something new in a law that no one really noticed before.

But there’s also a very good chance this lawsuit flops for a variety of other reasons without ever really exploring the nature of this possible trojan horse. There are a wide variety of possible outcomes here.

But first, some background.

For years, we’ve talked about the importance of tools and systems that give end users more control over their own experiences online, rather than leaving it entirely up to the centralized website owners. This has come up in a variety of different contexts in different ways, from “Protocols, not Platforms” to “adversarial interoperability,” to “magic APIs” to “middleware.” These are not all exactly the same thing, but they’re all directionally strongly related, and conceivably could work well together in interesting ways.

But there are always questions about how to get there, and what might stand in the way. One of the biggest things standing in the way over the last decade or so has been interpretations of various laws that effectively allow social media companies to threaten and/or bring lawsuits against companies trying to provide these kinds of additional services. This can take the form of a DMCA 1201 claim for “circumventing” a technological block. Or, more commonly, it has taken the form of a civil Computer Fraud & Abuse Act (CFAA) claim.

The most representative example of where this goes wrong is when Facebook sued Power Ventures years ago. Power was trying to build a unified dashboard across multiple social media properties. Users could provide Power with their own logins to social media sites. This would allow Power to log in to retrieve and post data, so that someone could interact with their Facebook community without having to personally go into Facebook.

This was a potentially powerful tool in limiting Facebook’s ability to become a walled-off garden with too much power. And Facebook realized that too. That’s why it sued Power, claiming that it violated the CFAA’s prohibition on “unauthorized access.”

The CFAA was designed (poorly and vaguely) as an “anti-hacking” law. And you can see where “unauthorized access” could happen as a result of hacking. But Facebook (and others) have claimed that “unauthorized access” can also be “because we don’t want you to do that with your own login.”

And the courts have agreed to Facebook’s interpretation, with a few limitations (that don’t make that big of a difference).

I still believe that this ability to block interoperability/middleware with law has been a major (perhaps the biggest) reason “big tech” is so big. They’re able to use these laws to block out the kinds of companies who would make the market more competitive and pull down some of the walls of their walled gardens.

That brings us to this lawsuit.

Ethan Zuckerman has spent years trying to make the internet a better, more open space (partially, I think, in penance for creating the world’s first pop-up internet ad). He’s been doing some amazing work on reimagining the digital public infrastructure, which I keep meaning to write about, but never quite find the time to get to.

According to the lawsuit, he wants to build a tool called “Unfollow Everything 2.0.” The tool is based on a similar tool, also called Unfollow Everything, that was built by Louis Barclay a few years ago and did what it says on the tin: let you automatically unfollow everything on Facebook. Facebook sent Barclay a legal threat letter and banned him for life from the site.

Zuckerman wants to recreate the tool with some added features enabling users to opt-in to provide some data to researchers about the impact of not following anyone on social media. But he’s concerned that he’d face legal threats from Meta, given what happened with Barclay.

Using Unfollow Everything 2.0, Professor Zuckerman plans to conduct an academic research study of how turning off the newsfeed affects users’ Facebook experience. The study is opt-in—users may use the tool without participating in the study. Those who choose to participate will donate limited and anonymized data about their Facebook usage. The purpose of the study is to generate insights into the impact of the newsfeed on user behavior and well-being: for example, how does accessing Facebook without the newsfeed change users’ experience? Do users experience Facebook as less “addictive”? Do they spend less time on the platform? Do they encounter a greater variety of other users on the platform? Answering these questions will help Professor Zuckerman, his team, and the public better understand user behavior online and the influence that platform design has on that behavior

The tool and study are nearly ready to launch. But Professor Zuckerman has not launched them because of the near certainty that Meta will pursue legal action against him for doing so.

So he’s suing for declaratory judgment that he’s not violating any laws. If he were just suing for declaratory judgment over the CFAA, that would (maybe?) be somewhat understandable or conventional. But, while that argument is in the lawsuit, the main claim in the case is something very, very different. It’s using a part of Section 230, section (c)(2)(B), that almost never gets mentioned, let alone tested.

Most Section 230 lawsuits involve (c)(1): the famed “26 words” that state “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Some Section 230 cases involve (c)(2)(A), which states that “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Many people incorrectly think that Section 230 cases turn on this part of the law, when really, most of those cases are already cut off by (c)(1) because they try to treat a service as a speaker or publisher.

But then there’s (c)(2)(B), which says:

No provider or user of an interactive computer service shall be held liable on account of any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)

As noted, this basically never comes up in cases. But the argument being made here is that this creates some sort of proactive immunity from lawsuits for middleware creators who are building tools (“technical means”) to “restrict access.” In short: does Section 230 protect “Unfollow Everything” from basically any legal threats from Meta, because it’s building a tool to restrict access to content on Meta platforms?

Or, according to the lawsuit:

This provision would immunize Professor Zuckerman from civil liability for designing, releasing, and operating Unfollow Everything 2.0

First, in operating Unfollow Everything 2.0, Professor Zuckerman would qualify as a “provider . . . of an interactive computer service.” The CDA defines the term “interactive computer service” to include, among other things, an “access software provider that provides or enables computer access by multiple users to a computer server,” id. § 230(f)(2), and it defines the term “access software provider” to include providers of software and tools used to “filter, screen, allow, or disallow content.” Professor Zuckerman would qualify as an “access software provider” because Unfollow Everything 2.0 enables the filtering of Facebook content—namely, posts that would otherwise appear in the feed on a user’s homepage. And he would “provide[] or enable[] computer access by multiple users to a computer server” by allowing users who download Unfollow Everything 2.0 to automatically unfollow and re-follow friends, groups, and pages; by allowing users who opt into the research study to voluntarily donate certain data for research purposes; and by offering online updates to the tool.

Second, Unfollow Everything 2.0 would enable Facebook users who download it to restrict access to material they (and Zuckerman) find “objectionable.” Id. § 230(c)(2)(A). The purpose of the tool is to allow users who find the newsfeed objectionable, or who find the specific sequencing of posts within their newsfeed objectionable, to effectively turn off the feed.

I’ve been talking to a pretty long list of lawyers about this and I’m somewhat amazed at how this seems to have taken everyone by surprise. Normally, when new lawsuits come out, I’ll gut check my take on it with a few lawyers and they’ll all agree with each other whether I’m heading in the right direction or the totally wrong direction. But here… the reactions were all over the map, and not in any discernible pattern. More than one person I spoke to started by suggesting that this was a totally crazy legal theory, only to later come back and say “well, maybe it actually makes some sense.”

It could be a trojan horse that no one noticed in Section 230 that effectively bars websites from taking legal action against middleware providers who are providing technical means for people to filter or screen content on their feed. Now, it’s important to note that it does not bar those companies from putting in place technical measures to block such tools, or just banning accounts or whatever. But that’s very different from threatening or filing civil suits.

If this theory works, it could do a lot to enable these kinds of middleware services and make it significantly harder for big social media companies like Meta to stop them. If you believe in adversarial interoperability, that could be a very big deal. Like, “shift the future of the internet we all use” kind of big.

Now, there are many hurdles before we get to that point. And there are some concerns that if this legal theory succeeds, it could also lead to other problematic results (though I’m less convinced by those).

Let’s start with the legal concerns.

First, as noted, this is a very novel and untested legal theory. Upon reading the case initially, my first reaction was that it felt like one of those slightly wacky academic law journal articles you see law professors write sometimes, with some far-out theory they have that no one’s ever really thought about. This one is in the form of a lawsuit, so at some point we’ll find out how the theory works.

But that alone might make a judge unwilling to go down this path.

Then there are some more practical concerns. Is there even standing here? ¯\_(ツ)_/¯ Zuckerman hasn’t released his tool. Meta hasn’t threatened him. He makes a credible claim that given Meta’s past actions, they’re likely to react unfavorably, but is that enough to get standing?

Then there’s the question of whether or not you can even make use of 230 in an affirmative way like this. 230 is used as a defense to get cases thrown out, not proactively for declaratory judgment.

Also, this is not my area of expertise by any stretch of the imagination, but I remember hearing in the past that outside of IP law, courts (and especially courts in the 9th Circuit) absolutely disfavor lawsuits for declaratory judgment (i.e., a lawsuit before there’s any controversy, where you ask the court “hey, can you just check and make sure I’m on the right side of the law here…”). So I could totally see the judge saying “sorry, this is not a proper use of our time” and tossing it. In fact, that might be the most likely result.

Then there’s this kinda funny but possibly consequential issue: there’s a typo in Section 230 that almost everyone has ignored for years. Because it’s never really mattered. Except it matters in this case. Jeff Kosseff, the author of the book on Section 230, always likes to highlight that in (c)(2)(B), it says that the immunity is for using “the technical means to restrict access to material described in paragraph (1).”

But they don’t mean “paragraph (1).” They mean “paragraph (A).” Paragraph (1) is the “26 words” and does not describe any material, so it would make no sense to say “material described in paragraph (1).” It almost certainly means “paragraph (A),” which is the “good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” section. That’s the one that describes material.

I know that, at times, Jeff has joked when people ask him how 230 should be reformed he suggests they fix the typo. But Congress has never listened.

And now it might matter?

The lawsuit basically pretends that the typo isn’t there, reading the statute as if it said “paragraph (A)” where it actually says “paragraph (1).”

I don’t know how that gets handled. Perhaps it gets ignored like every time Jeff points out the typo? Perhaps it becomes consequential? Who knows!

There are a few other oddities here, but this article is getting long enough and has mostly covered the important points. However, I will conclude on one other point that one of the people I spoke to raised. As discussed above, Meta has spent most of the past dozen or so years going legally ballistic about anyone trying to scrape or data mine its properties in any way.

Yet, earlier this year, it somewhat surprisingly bailed out on a case where it had sued Bright Data for scraping/data mining. Lawyer Kieran McCarthy (who follows data scraping lawsuits like no one else) speculated that Meta’s surprising about-face may be because it suddenly realized that for all of its AI efforts, it’s been scraping everyone else. And maybe someone high up at Meta suddenly realized how it was going to look in court when it got sued for all the AI training scraping, if the plaintiffs point out that at the very same time it was suing others for scraping its properties.

For me, I suspect the decision not to appeal might be more about a shift in philosophy by Meta and perhaps some of the other big platforms than it is about their confidence in their ability to win this case. Today, perhaps more important to Meta than keeping others off its public data is having access to everyone else’s public data. Meta is concerned that its perceived hypocrisy on these issues might just work against it. Just last month, Meta had its success in prior scraping cases thrown back in its face in a trespass to chattels case. Perhaps it was worried here that success on appeal might do it more harm than good.

In short, I think Meta cares more about access to large volumes of data and AI than it does about outsiders scraping its public data now. My hunch is that it knows that any success in anti-scraping cases can be thrown back at it in its own attempts to build AI training databases and LLMs. And it cares more about the latter than the former.

I’ve separately spoken to a few experts who were worried about the consequences if Zuckerman succeeded here. They were worried that it might simultaneously immunize potential bad actors. Specifically, you could see a kind of Cambridge Analytica or Clearview AI situation, where companies trying to get access to data for malign purposes convince people to install their middleware app. This could lead to a massive expropriation of data, and possibly some very sketchy services as a result.

But I’m less worried about that, mainly because it’s the sketchy use of that data that would still (hopefully?) violate certain laws, not the access to the data itself. Still, there are at least some questions being raised about whether this type of more proactive immunity might end up shielding bad actors, and that is at least worth thinking about.

Either way, this is going to be a case worth following.

  • ✇IEEE Spectrum
  • An Engineer Who Keeps Meta’s AI Infrastructure HummingEdd Gent
    Making breakthroughs in artificial intelligence these days requires huge amounts of computing power. In January, Meta CEO Mark Zuckerberg announced that by the end of this year, the company will have installed 350,000 Nvidia GPUs—the specialized computer chips used to train AI models—to power its AI research. As a data-center network engineer with Meta’s network infrastructure team, Susana Contrera is playing a leading role in this unprecedented technology rollout. Her job is about “bringing
     

An Engineer Who Keeps Meta’s AI Infrastructure Humming

By: Edd Gent
29 April 2024 at 17:00


Making breakthroughs in artificial intelligence these days requires huge amounts of computing power. In January, Meta CEO Mark Zuckerberg announced that by the end of this year, the company will have installed 350,000 Nvidia GPUs—the specialized computer chips used to train AI models—to power its AI research.

As a data-center network engineer with Meta’s network infrastructure team, Susana Contrera is playing a leading role in this unprecedented technology rollout. Her job is about “bringing designs to life,” she says. Contrera and her colleagues take high-level plans for the company’s AI infrastructure and turn those blueprints into reality by working out how to wire, power, cool, and house the GPUs in the company’s data centers.

Susana Contrera

  • Employer: Meta
  • Occupation: Data-center network engineer
  • Education: Bachelor’s degree in telecommunications engineering, Andrés Bello Catholic University in Caracas, Venezuela

Contrera, who now works remotely from Florida, has been at Meta since 2013, spending most of that time helping to build the computer systems that support its social media networks, including Facebook and Instagram. But she says that AI infrastructure has become a growing priority, particularly in the past two years, and represents an entirely new challenge. Not only is Meta building some of the world’s first AI supercomputers, it is racing against other companies like Google and OpenAI to be the first to make breakthroughs.

“We are sitting right at the forefront of the technology,” Contrera says. “It’s super challenging, but it’s also super interesting, because you see all these people pushing the boundaries of what we thought we could do.”

Cisco Certification Opened Doors

Growing up in Caracas, Venezuela, Contrera says her first introduction to technology came from playing video games with her older brother. But she decided to pursue a career in engineering because of her parents, who were small-business owners.

“They were always telling me how technology was going to be a game changer in the future, and how a career in engineering could open many doors,” she says.

She enrolled at Andrés Bello Catholic University in Caracas in 2001 to study telecommunications engineering. In her final year, she signed up for the training and certification program to become a Cisco Certified Network Associate. The program covered topics such as the fundamentals of networking and security, IP services, and automation and programmability.

The certificate opened the door to her first job in 2006—managing the computer network of a business-process outsourcing company, Atento, in Caracas.

“It was a very large enterprise network that had just the right amount of complexity for a very small team,” she says. “That gave me a lot of freedom to put my knowledge into practice.”

At the time, Venezuela was going through a period of political unrest. Contrera says she didn’t see a future for herself in the country, so she decided to leave for Europe.

She enrolled in a master’s degree program in project management in 2009 at Spain’s Pontifical University of Salamanca, continuing to collect additional certifications through Cisco in her free time. In 2010, partway through the program, she left for a job as a support engineer at the Madrid-based law firm Ecija, which provides legal advice to technology, media, and telecommunications companies. Following that with a stint as a network engineer at Amazon’s facility in Dublin from 2011 to 2013, she then joined Meta and “the rest is history,” she says.

Starting From the Edge Network

Contrera first joined Meta as a network deployment engineer, helping build the company’s “edge” network. In this type of network design, user requests go out to small edge servers dotted around the world instead of to Meta’s main data centers. Edge systems can deal with requests faster and reduce the load on the company’s main computers.
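To make that idea concrete, here is a small, purely illustrative Python sketch of what “send the user to the nearest edge server” can mean. The hostnames are invented, and real edge networks steer traffic with DNS and anycast routing rather than client-side probing, so this is only a thumbnail of the concept, not how Meta actually does it.

    # Purely illustrative sketch of edge-server selection: probe a few points of
    # presence (POPs) and pick the one that answers fastest. The hostnames are
    # invented; production edge networks use DNS/anycast, not client-side probing.
    import socket
    import time

    EDGE_POPS = ["edge-eu.example.net", "edge-us.example.net", "edge-apac.example.net"]

    def fastest_pop(hostnames, port=443, timeout=2.0):
        """Return the hostname with the lowest TCP connect time (a crude latency probe)."""
        best_host, best_rtt = None, float("inf")
        for host in hostnames:
            try:
                start = time.monotonic()
                with socket.create_connection((host, port), timeout=timeout):
                    rtt = time.monotonic() - start
            except OSError:
                continue  # unreachable POP; skip it
            if rtt < best_rtt:
                best_host, best_rtt = host, rtt
        return best_host

    print(fastest_pop(EDGE_POPS))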

After several years traveling around Europe setting up this infrastructure, she took a managerial position in 2016. But after a couple of years she decided to return to a hands-on role at the company.

“I missed the satisfaction that you get when you’re part of a project, and you can clearly see the impact of solving a complex technical problem,” she says.

Because of the rapid growth of Meta’s services, her work primarily involved scaling up the capacity of its data centers as quickly as possible and boosting the efficiency with which data flowed through the network. But the work she is doing today to build out Meta’s AI infrastructure presents very different challenges, she says.

Designing Data Centers for AI

Training Meta’s largest AI models involves coordinating computation over large numbers of GPUs split into clusters. These clusters are often housed in different facilities, sometimes in distant cities. It’s crucial that messages passing back and forth have very low latency and are lossless—in other words, they move fast and don’t drop any information.

Building data centers that can meet these requirements first involves Meta’s network engineering team deciding what kind of hardware should be used and how it needs to be connected.

“They have to think about how those clusters look from a logical perspective,” Contrera says.

Then Contrera and other members of the network infrastructure team take this plan and figure out how to fit it into Meta’s existing data centers. They consider how much space the hardware needs, how much power and cooling it will require, and how to adapt the communications systems to support the additional data traffic it will generate. Crucially, this AI hardware sits in the same facilities as the rest of Meta’s computing hardware, so the engineers have to make sure it doesn’t take resources away from other important services.

“We help translate these ideas into the real world,” Contrera says. “And we have to make sure they fit not only today, but they also make sense for the long-term plans of how we are scaling our infrastructure.”

Working on a Transformative Technology

Planning for the future is particularly challenging when it comes to AI, Contrera says, because the field is moving so quickly.

“It’s not like there is a road map of how AI is going to look in the next five years,” she says. “So we sometimes have to adapt quickly to changes.”

With today’s heated competition among companies to be the first to make AI advances, there is a lot of pressure to get the AI computing infrastructure up and running. This makes the work much more demanding, she says, but it’s also energizing to see the entire company rallying around this goal.

While she sometimes gets lost in the day-to-day of the job, she loves working on a potentially transformative technology. “It’s pretty exciting to see the possibilities and to know that we are a tiny piece of that big puzzle,” she says.

Hands-on Data Center Experience

For those interested in becoming a network engineer, Contrera says the certification programs run by companies like Cisco are useful. But she says it’s also important not to focus just on simply ticking boxes or rushing through courses just to earn credentials. “Take your time to understand the topics because that’s where the value is,” she says.

It’s good to get some experience working in data centers on infrastructure deployment, she says, because “getting your hands dirty can give you a lot of perspective.” And increasingly, coding can be another useful skill to develop to complement more traditional network engineering capabilities.

Mainly, she says, just “enjoy the ride” because networking can be a truly fascinating topic once you delve in. “There’s this orchestra of protocols and different technologies playing together and interacting,” she says. “I think that’s beautiful.”

  • ✇Ars Technica - All content
  • Over 100 far-right militias are coordinating on FacebookWIRED
    Enlarge (credit: NurPhoto via Getty) “Join Your Local Militia or III% Patriot Group,” a post urged the more than 650 members of a Facebook group called the Free American Army. Accompanied by the logo for the Three Percenters militia network and an image of a man in tactical gear holding a long rifle, the post continues: “Now more than ever. Support the American militia page.” Other content and messaging in the group is similar. And despite the fact that Facebook bans paramili
     

Over 100 far-right militias are coordinating on Facebook

By: WIRED
3 May 2024 at 15:40
Far-right extremists

(credit: NurPhoto via Getty)

“Join Your Local Militia or III% Patriot Group,” a post urged the more than 650 members of a Facebook group called the Free American Army. Accompanied by the logo for the Three Percenters militia network and an image of a man in tactical gear holding a long rifle, the post continues: “Now more than ever. Support the American militia page.”

Other content and messaging in the group is similar. And despite the fact that Facebook bans paramilitary organizing and deemed the Three Percenters an “armed militia group” on its 2021 Dangerous Individuals and Organizations List, the post and group remained up until WIRED contacted Meta for comment about their existence.

Free American Army is just one of around 200 similar Facebook groups and profiles, most of which are still live, that anti-government and far-right extremists are using to coordinate local militia activity around the country.

  • ✇PCGamesN
  • Valve Index is in trouble as Meta Quest 3 closes Steam VR headset gapSamuel Willetts
    In the months since its launch, the Meta Quest 3 has proven particularly popular among Steam users, eclipsing most other VR headsets. While its predecessor, the Quest 2, and Valve Index have thus far proven the only difficult roadblocks in the Quest 3's path to dominance, it seems all but certain that this is set to change. Continue reading Valve Index is in trouble as Meta Quest 3 closes Steam VR headset gap MORE FROM PCGAMESN: Steam FAQ, Steam family sharing, Steam
     

Valve Index is in trouble as Meta Quest 3 closes Steam VR headset gap

3 May 2024 at 13:12

In the months since its launch, the Meta Quest 3 has proven particularly popular among Steam users, eclipsing most other VR headsets. While its predecessor, the Quest 2, and Valve Index have thus far proven the only difficult roadblocks in the Quest 3's path to dominance, it seems all but certain that this is set to change.

  • ✇Android Authority
  • Meta’s supercharged AI assistant is taking over its apps across the worldRushil Agrawal
    Credit: Edgar Cervantes / Android Authority Meta has dramatically upgraded its AI assistant, powered by the new Llama 3 language model. The enhanced Meta AI is available on Facebook, Instagram, WhatsApp, Messenger, and its new standalone website. Meta also announced the rollout of an upgraded image generator, where images change in real-time as you type text descriptions. The battle for AI supremacy between ChatGPT and Gemini just turned into a three-way race as Meta has unveiled a sign
     

Meta’s supercharged AI assistant is taking over its apps across the world

19 April 2024 at 01:46

Meta logo on smartphone stock photo

Credit: Edgar Cervantes / Android Authority

  • Meta has dramatically upgraded its AI assistant, powered by the new Llama 3 language model.
  • The enhanced Meta AI is available on Facebook, Instagram, WhatsApp, Messenger, and its new standalone website.
  • Meta also announced the rollout of an upgraded image generator, where images change in real-time as you type text descriptions.


The battle for AI supremacy between ChatGPT and Gemini just turned into a three-way race as Meta has unveiled a significantly upgraded version of its AI assistant. The new-and-improved Meta AI, powered by the cutting-edge Llama 3 language model, is boldly proclaimed by CEO Mark Zuckerberg to be “now the most intelligent AI assistant that you can freely use.”

Meta first introduced Meta AI last year, but it was limited to the US, and its capabilities were not on par with those of competitors like ChatGPT and Gemini. However, the integration of the Llama 3 model, also announced today, appears to be a quantum leap for Meta’s AI. Benchmark tests conducted by the company indicate that, with Llama 3 at its core, Meta AI has the potential to outperform other top-tier AI models, particularly in areas like translation, dialogue generation, and complex reasoning.

What can the new Meta AI do?

The overhauled Meta AI is now directly accessible through the search bars within Facebook, Instagram (DMs page), WhatsApp, and Messenger. Plus, you can access it at a new standalone website, meta.ai. It can search the web for you, provide recommendations for restaurants, flights, and more, or clarify complex concepts. On Facebook, Meta AI can also interact with your feed, allowing you to ask questions about content you see, like the best time to catch the Northern Lights after viewing a stunning photo of them.

I could even ask it for tailored content like fitness reels or comedy videos, with Meta AI curating a 5-video feed for me in those cases. Meta has included many prompts under the search bar on Facebook and Instagram to help us get the most out of Meta AI’s abilities. Thankfully, we can still use the search bars for regular searches for accounts and hashtags.

From what I have seen so far, Meta AI’s answers are not as nuanced and detailed as what Gemini would give me for similar questions, but it benefits from two key strengths. First, it’s seamlessly embedded within Meta’s popular apps — Facebook, Instagram, WhatsApp, and Messenger — giving their billions of users convenient access to the AI assistant. Second, Meta AI isn’t tied to one search engine; it openly uses both Google and Bing to process queries, removing potential bias toward either company’s algorithms.

One of the most intriguing parts of Meta AI is its Imagine image generator. This feature first appeared within WhatsApp a few months ago and allowed users to create AI-generated images based on text descriptions. Since then, it has expanded to Instagram and Facebook.

Starting today, WhatsApp beta users and those using Meta AI’s desktop website in the US can try out an even more advanced version of Imagine. This version generates images in real-time while you type, with the image updating as you add more details, really demonstrating how far generative AI has come.

Currently, Meta AI works in English and is rolling out to many countries outside the US, including Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia, and Zimbabwe, with more on the way.

  • ✇Android Police
  • WhatsApp has a note-pinning feature coming for your contactsHamid Ganji
    While messaging apps were initially aimed at serving personal users, they soon became a desired tool among businesses. With that in mind, popular messaging apps like WhatsApp were prompted to add more business-driven features, including Communities, business profiles, and Facebook Shops Integration. However, the WhatsApp platform still has significant untapped potential to incorporate more business features.
     

WhatsApp has a note-pinning feature coming for your contacts

22 April 2024 at 18:11

While messaging apps were initially aimed at personal users, they soon became a sought-after tool for businesses. With that in mind, popular messaging apps like WhatsApp have added more business-driven features, including Communities, business profiles, and Facebook Shops Integration. However, the WhatsApp platform still has significant untapped potential for more business features.

  • ✇Techdirt
  • 96% Of Hospitals Share Sensitive Visitor Data With Meta, Google, and Data BrokersKarl Bode
    I’ve mentioned more than a few times how the singular hyperventilation about TikTok is kind of silly distraction from the fact that the United States is too corrupt to pass a modern privacy law, resulting in no limit of dodgy behavior, abuse, and scandal. We have no real standards thanks to corruption, and most people have no real idea of the scale of the dysfunction. Case in point: a new study out of the University of Pennsylvania (hat tip to The Register) analyzed a nationally representative
     

96% Of Hospitals Share Sensitive Visitor Data With Meta, Google, and Data Brokers

By: Karl Bode
22 April 2024 at 14:23

I’ve mentioned more than a few times how the singular hyperventilation about TikTok is a kind of silly distraction from the fact that the United States is too corrupt to pass a modern privacy law, resulting in no limit of dodgy behavior, abuse, and scandal. We have no real standards thanks to corruption, and most people have no real idea of the scale of the dysfunction.

Case in point: a new study out of the University of Pennsylvania (hat tip to The Register) analyzed a nationally representative sample of 100 U.S. hospitals, and found that 96 percent of them were doling out sensitive visitor data to Google, Meta, and a vast coalition of dodgy data brokers.

Hospitals, it should be clear, aren’t legally required to publish website privacy policies that clearly detail how and with whom they share visitor data. Again, because we’re too corrupt as a country to impose and enforce such requirements. The FTC does have some jurisdiction, but it’s too short-staffed and underfunded (quite intentionally) to tackle the real scope of U.S. online privacy violations.

So the study found that a chunk of these hospital websites didn’t even have a privacy policy. And of the ones that did, about half the time the over-verbose pile of ambiguous and intentionally confusing legalese didn’t really inform visitors that their data was being transferred to a long list of third parties. Or, for that matter, who those third parties even are:

“…we found that although 96.0% of hospital websites exposed users to third-party tracking, only 71.0% of websites had an available website privacy policy…Only 56.3% of policies (and only 40 hospitals overall) identified specific third-party recipients.”

Data in this instance can involve everything from email and IP addresses to what you clicked on, what you researched, demographic info, and location. This was all a slight improvement over a study they did a year earlier showing that 98 percent of hospital websites shared sensitive data with third parties. The professors clearly knew what to expect, but were still disgusted in comments to The Register:

“It’s shocking, and really kind of incomprehensible,” said Dr Ari Friedman, an assistant professor of emergency medicine at the University of Pennsylvania. “People have cared about health privacy for a really, really, really long time. It’s very fundamental to human nature. Even if it’s information that you would have shared with people, there’s still a loss, just an intrinsic loss, when you don’t even have control over who you share that information with.”

If this data is getting into the hands of dodgy international and unregulated data brokers, there’s no limit of places it can end up. Brokers collect a huge array of demographic, behavioral, and location data, use it to create detailed profiles of individuals, then sell access in a million different ways to a long line of additional third parties, including the U.S. government and foreign intelligence agencies.
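To make the tracking finding a bit more concrete, here is a minimal, hypothetical Python sketch of the simplest form of such an audit: fetch a page and list the third-party hosts its embedded scripts, images, and iframes point at. The URL is a placeholder, the study’s actual methodology was more sophisticated, and a real audit would also need a headless browser, since many trackers are injected by JavaScript after the page loads.

    # Hypothetical sketch: list third-party hosts referenced by a page's static HTML.
    # Requires the requests and beautifulsoup4 packages; the URL is a placeholder.
    from urllib.parse import urlparse

    import requests
    from bs4 import BeautifulSoup

    def third_party_hosts(page_url: str) -> set[str]:
        first_party = urlparse(page_url).netloc
        html = requests.get(page_url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        hosts = set()
        for tag in soup.find_all(["script", "img", "iframe"], src=True):
            host = urlparse(tag["src"]).netloc
            if host and host != first_party:
                hosts.add(host)  # anything served from another domain is third party
        return hosts

    for host in sorted(third_party_hosts("https://example-hospital.org")):
        print(host)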

There should be hard requirements about transparent, clear, and concise notifications of exactly what data is being collected and sold and to whom. There should be hard requirements that users have the ability to opt out (or, preferably in the case of sensitive info, opt in). There should be hard punishment for companies and executives that play fast and loose with consumer data.

And we have none of that because our lawmakers decided, repeatedly, that making money was more important than market health, consumer welfare, and public safety. The result has been a parade of scandals that skirt ever closer to people being killed, at scale.

So again, the kind of people that whine about the singular privacy threat that is TikTok (like say FCC Commissioner Brendan Carr, or Senator Marsha Blackburn) — but have nothing to say about the much broader dysfunction created by rampant corruption — are advertising they either don’t know what they’re talking about, or aren’t addressing the full scope of the problem in good faith.
