Google is enhancing Gmail’s Help me write feature with a new “Polish” option that can help you draft a formal email from rough notes with a tap.
Gmail for Android and iOS is also getting new Help me write and Refine my draft shortcuts, making the tools easier to use.
These new features are now rolling out to select Google Workspace subscribers.
Google rolled out significant upgrades for Gmail’s Help me write feature earlier this year, giving users the ability to dictate text prompts to easily draft an email. The company also previewed a new “Polish” feature that could fix drafts to create a structured email at the touch of a button. This feature is now rolling out to users along with new Help me write and Refine my draft shortcuts for Gmail on Android and iOS.
Google announced the rollout in a recent Workspace Updates blog, revealing that the new Polish option will be available as part of Help me write’s Refine my draft feature on Gmail for mobile and web. With this feature, you’ll be able to enter rough notes into a draft and Gemini will “turn the content into a completely formal draft, ready for you to review in one click.”
Since Gemini Live became available to me on my Pixel 8 Pro late last week, I’ve found myself using it very often. Not because it’s the latest and hottest trend, no, but because almost everything I hated about talking to Google Assistant is no longer an issue with Gemini Live. The difference is staggering.
I have a lot to say about the topic, but for today, I want to focus on a few aspects that make talking to Gemini Live such a better experience compared to using Google Assistant or the regular Gemini.
1. Gemini Live understands me, the way I speak
Credit: Rita El Khoury / Android Authority
English is only my third language and even though I’ve been speaking it for decades, it’s still not the most natural language for me to use. Plus, I have the kind of brain that zips all over the place. So, every time I wanted to trigger Google Assistant, I had to think of the exact sentence or question before saying, “Hey Google.” For that reason, and that reason alone, talking to Assistant never felt natural to me. It was always premeditated, and it always required me to pause what I was doing and give it my full attention.
Google Assistant wants me to speak like a robot to fit its mold. Gemini Live lets me speak however I want.
Gemini Live understands natural human speech. For me, it works around my own speech’s idiosyncrasies, so I can start speaking without thinking or preparing my full question beforehand. I can “uhm” and “ah” mid-sentence, repeat myself, circle around the main question, and figure things out as I speak, and Live will still understand all of that.
I can even ask multiple questions and be as vague or as precise as I like. There’s really no restriction on how to speak or what to say, no specific commands, no specific ways to phrase questions — just no constraints whatsoever. That completely changes the usability of AI chatbots for me.
2. This is what real, continuous conversations should be like
Credit: Rita El Khoury / Android Authority
Google Assistant added a setting for Continuous Conversations many years ago, but it never felt natural or all that continuous. I’d say “Hey Google,” ask it for something, wait for the full answer, wait an extra second for it to start listening again, and then say my second command. If I stayed silent for a couple of seconds, the conversation was done and I had to re-trigger Assistant.
Plus, Assistant treats every command separately. There’s no real ‘chat’ feeling, just a series of independent questions or commands and answers.
Interruptions, corrections, clarifications, idea continuity, topic changes — Gemini Live handles all of those.
Gemini Live works differently. Every session is a real open conversation, where I can talk back and forth for a while, and it still remembers everything that came before. So if I say I like Happy Endings and ask for similar TV show recommendations, I can listen in, then ask more questions, and it’ll keep in mind my preference for Happy Endings-like shows.
I can also interrupt it at any point and correct it if it misunderstood me or if the answer doesn’t satisfy me. I don’t have to scream at it to stop or wait as it drones on for two minutes with a wrong answer. I can also change the conversation topic in an instant or ask more precise questions if needed.
Plus, Gemini Live doesn’t shut off our chat after a few seconds of silence. So I can take a few seconds to properly assimilate the answer and think of other clarifications or questions to ask, you know, like a normal human, instead of a robot who has the follow-ups ready in a second.
Better yet, I can minimize Live and go use other apps while still keeping the chat going. I’ve found this excellent while browsing or chatting with friends. I can either invoke Live mid-browsing to ask questions and get clarifications about what I’m reading, or start a regular Live chat then pull up a browser to double check what Gemini is telling me.
3. TL;DR? Ask it for a summary
Credit: Rita El Khoury / Android Authority
As I mentioned earlier, every command is a separate instance for Google Assistant. Gemini Live considers an entire chat as an entity, which lets me do something I could never do with Assistant: ask for a summary.
So if I had a chat about places in Paris where I could run around and test the new Panorama mode on the Pixel 9 series, I can ask for a summary at the end, and it’ll list all of them. This is incredibly helpful when trying to understand complex topics or compile a list of suggestions, for example.
4. Want to talk more about a specific topic? Resume an older chat
Credit: Rita El Khoury / Android Authority
At one point, I opened Gemini Live and said something like, “Hey, can we continue our chat about Paris panorama photos?” And it said yes. I was a bit gobsmacked. So I went on, and it seemed to really know where we left off. I tried that again a few times, and it worked every time. Google Assistant just doesn’t have anything like this.
Another way to trigger this more reliably is to open Gemini, expand the full Gemini app, tap on Recents and open a previous chat. Tapping on the Gemini Live icon in the bottom right here allows you to continue an existing chat as if you never stopped it or exited it.
5. Check older chats and share them to Drive or Gmail
Credit: Rita El Khoury / Android Authority
Viewing my Google Assistant history has always been a convoluted process that requires going to my Google account, finding my personal history, and checking the last few commands I’ve done.
With Gemini, it’s so easy to open up previous Live chats and read everything that was said in them. Even better, every chat can be renamed, pinned to the top, or deleted in its entirety. Plus, every response can be copied, shared, or quickly exported to Google Docs or Gmail. This makes it easy for me to manage my Gemini Live data, delete what needs to be deleted, and share or save what I care about.
Google Assistant still has a (significant) leg up
Credit: Rita El Khoury / Android Authority
Despite everything Gemini Live does well, there are many instances where I felt its limitations while using it. For one, the Live session is separate from the main Gemini experience, and Live only handles general-knowledge questions, not personal data. So I can ask Gemini (not Live) about my calendar, send messages with it, start timers, check my Drive documents, control my smart home, and more, just as I could with Assistant, but I can’t do any of that with Gemini Live. The latter is more of a lively Google Search experience, and none of the regular Gemini extensions are accessible in Live. Google said it was working on bringing them over, though, and that is the most exciting prospect for me.
Gemini Live still doesn't have access to personal data, calendars, smart home, music services, etc...
Because of how it’s built and what it currently does, Gemini Live requires a constant internet connection and there’s nothing you can do without it. Assistant is able to handle some basic local commands like device controls, timers, and alarms, but Gemini Live can’t.
And for now, my experience with multiple-language support in Gemini Live has been iffy at best — not that Assistant’s support of multiple languages is stellar, but it works. On my phone, which is set to English (US), Gemini Live understands me only when I speak in English. I can tell it to answer in French, and it will, but it won’t understand me or recognize my words if I start speaking French. I hope Google brings a more natural multilingual experience to it, because that could be life-changing for someone like me who thinks and talks in three languages at the same time.
Credit: Rita El Khoury / Android Authority
Logistically, my biggest issue with Gemini Live is that I can’t control it via voice yet. My “Hey Google” command opens up the main Gemini voice command interface, which is neat, but I need to manually tap the Live button to trigger a chat. And when I’m done talking, the chat doesn’t end unless I manually tap to end it. No amount of “thank you,” “that’s it,” “we’re done,” “goodbye,” or other words did the trick to end the chat. Only the red End button does.
Google Assistant was a stickler for sourcing every piece of info; Gemini Live doesn't care about sources.
Realistically, though, my biggest Gemini Live problem is that there’s no sourcing for any of the info it shares. Assistant used to be a stickler for sourcing everything; how many times have you heard it say something like, “According to [website]…” or, “on [website], they say…”? Gemini Live just states facts instead, with no immediate way to verify them. All I can do is end the chat, go to the transcript, and check for the Google button that appears below certain messages, which shows me related searches I can run to verify that info. Not very intuitive, Google, and not respectful to the millions of sites you’ve crawled to get your answer like, uh, I don’t know… Android Authority perhaps?
Google is launching several new AI features for the Pixel 9, including the ability to talk to Gemini using voice. There’s even a new app for generating AI images, called Pixel Studio.
One of the biggest features is called Add Me, and it lets you virtually add the original photographer to a group photo by stitching together two images.
There is also a Video Boost update that includes several improvements, including 8K upscaling and HDR Plus. This is only for Pro users and will arrive a little after the phone’s launch.
The Google Pixel 9 series will arrive with Android 14 instead of Android 15, but there are still plenty of software improvements to be found here, especially when it comes to AI. There are at least ten new AI features that we are aware of, with some of the most exciting additions being Gemini Live, Add Me, and Pixel Studio.
Historically, Google has announced new software and AI features without all of them rolling out right away. The good news is that most of the new features are arriving at launch, though at least a few won’t be ready until later. With that in mind, let’s jump right in and take a brief look at some of the biggest new AI features on the Pixel 9 series.
Pixel 9 AI features that are ready from day one
Pixel 9 Pro
Credit: C. Scott Brown / Android Authority
Let’s start with all the Pixel 9 features that are live from day one. All of these are available for the entire Pixel 9 series unless otherwise indicated.
Magic Editor adds auto frame and Reimagine
There are two new Magic Editor features, both of which will be at least temporarily exclusive to the Pixel 9 family. The first, auto frame, automatically frames your selected subject, even if that requires expanding the photo using AI. The second, Reimagine, lets you swap out backgrounds to add fireworks, pink clouds, and more.
Google Keep Magic List
You can now talk to Gemini and have it make you a grocery or to-do list in Google Keep. You don’t even have to spell out specific list items; just name the meals, and it can do the rest. Obviously, how well this works will probably depend on how specific you get. We hope you’ll even be able to give it specific sites with the recipes you want and have it do the work, but for now, that remains unclear until we get more hands-on time with the devices.
Gemini Live
You can now have live natural conversations with Gemini using Gemini Live, with your choice of ten different voices to pick from. While it will be available from day one, it is initially exclusive to Gemini Advanced users. We’ve tested Gemini Live out for ourselves and found it to be very impressive so far.
Pixel Screenshots
Credit: C. Scott Brown / Android Authority
Pixel Screenshots uses on-device AI to analyze all your screenshots. You can then ask Gemini questions and it can pull up information from the screenshots in the form of easily digestible answers.
Call Notes
Call Notes is built into the phone app and lets you record your calls. From there it will create a transcript and use Gemini to create a brief summary of the call. You can even search for these summaries and transcripts at any time in the future just by asking Gemini. This feature may not be available at launch in all regions, so your mileage may vary.
Pixel Studio
Pixel 9
Credit: C. Scott Brown / Android Authority
Pixel Studio is a brand new AI-powered app using Imagen 3. You can create new images through text prompts easily, but that’s not all. There’s even the ability to edit and modify these images after they are created. This allows you to better refine the image on the fly without having to completely generate a new one.
Pixel Weather
Credit: C. Scott Brown / Android Authority
This isn’t just a regular weather app, as it adds a few extra AI features to the mix including the ability to make AI weather summaries about the expected conditions and more. Pixel Weather is far from the most exciting addition to Google’s Pixel AI feature set, but it’s still a nice extra.
Add Me
Completed photo
Credit: C. Scott Brown / Android Authority
Add Me is arriving at launch but will initially be labeled a Preview (beta) feature. Add Me lets one user take a group photo, then swap places with the photographer for a second shot, standing in the spot they would have occupied in the original. Gemini then stitches the two images together, making it look like the whole group was present in the shot at once.
Pixel 9 AI features that won’t be ready until later
While most of the features above will be ready right away, it seems that a major update to Video Boost is on its way in the future, but won’t be ready for launch.
Credit: Google
Video Boost was introduced last year as a way to improve video quality, so it’s technically not new, but we’re counting it due to just how big this update is. Rendering is now 2x faster, it works on zoom up to 20x, and there’s even support for AI 8K upscaling. HDR Plus support is also in the works.
Google is rolling out a new floating overlay panel for Gemini on Android devices, featuring a subtle glow animation.
The panel allows Gemini responses to appear within the current app and enables a contextual understanding of on-screen content.
The update also includes a new “Ask about this video” chip that lets users ask questions about YouTube videos directly.
Google recently unveiled a series of exciting updates for its AI assistant, Gemini, during the Pixel 9 series launch event. While the introduction of Gemini Live mode stole the spotlight, Gemini is also getting a shiny new floating overlay panel for Android. (h/t: 9to5Google)
This new interface, currently being rolled out to Android users, features a visually pleasing glow animation that surrounds the panel whenever Gemini is activated. This subtle glow not only looks neat but is also a sign that your Gemini’s got a new trick up its sleeve: a contextual overlay that understands what you’re up to without taking over your whole screen.
This update was initially teased at the I/O 2024 conference in May and allows Gemini to deliver responses directly within the app you’re using rather than hijacking your entire screen. This design change aims to help users maintain focus on their current tasks while still benefiting from Gemini’s assistance. For those who prefer the more traditional, immersive experience, a quick tap on the top-right corner of the overlay will still expand it to full screen.
Additionally, the update includes a new “Ask about this video” chip, replacing the previous “Ask about this screen” prompt, which appears when Gemini is triggered on YouTube videos. This feature allows users to request summaries or pose follow-up questions about the video’s content.
As for the glowing floating overlay, it’s still in the process of rolling out, so if you haven’t seen it yet, hang tight. Google says it’ll be hitting more Android devices in the coming weeks, both for regular Gemini users and those with Gemini Advanced subscriptions.
All in all, these updates are setting the stage for a more seamless and engaging Gemini experience. If you’re an Android user, keep an eye out for that glow.
Google's 'Help me write,' a tool that essentially started out as an AI suggestion feature for Gmail to help you complete common sentences, expanded to Chrome earlier this year and has evolved into a robust writing companion. Powered by Gemini, the tool's functionality includes writing suggestions and rewrites, with more significant updates rolling out now that will enhance its ability to polish and refine your email drafts.
I needed desk space badly, as my hobby/actual-work desk was completely claimed by the A1 Mini and the AMS Lite doohickey sitting next to it. Together they were taking up about three horizontal feet, and I couldn’t see six inches of desk that wasn’t 3D-printer related or that was actually easy to reach.
I checked the options for compacting the printer: a wall mount, which was rated “probably the best option” by several people I don’t know, and adding a riser to place the AMS directly over the A1 Mini.
Before I go too far into this story, I’ll mention I’ve run two perfect prints and my table does not appear to be shaking around as much, but this may be wishful thinking.
The print took somewhere in the neighborhood of four hours. I was out of the office, looked in on Bambu Studio, and there was a printed riser just hanging out living its best life. When I got into work today, that was no longer the case: at some point after printing, it decided to detach from the plate and make a run for it.
With no damage noted, I set about removing the printed supports and installing it on the machine. It’s pretty evident what you need to do: remove the screw at the top of the pole, which lets the top come off; inside is a plate with three screws that can be removed with the tools that shipped with the printer; remove that plate, set the three screws aside, and get to screwing them back in through the riser.
I unloaded all my spools from the AMS because I suspected it was going to be a pain to mount with the spools on, and proceeded to mount it with no real issues. The tubing looked like it was not going to work any more as it was now pretty darn high, but worked fine.
Loaded up, two perfect prints in, and with about two feet of additional desk space reclaimed, I’m enjoying it.
I’ll update if I end up with any subpar prints in the next bit, but the added weight actually seems to have made the unit travel less.
Oh yeah: while I can’t find it at the time I’m writing this, I ran across a video yesterday, while looking for a solution, that said the main problem with this setup was not being able to access spools 3 & 4 easily. The unit with spools weighs something like two fat guinea pigs; just turn the unit if this is a concern.
Today was an interesting day. As I may have mentioned, I’m printing up fast removable suite-number signs as a work project using a Bambu A1 Mini. Today’s task was our logo plus a quick left/right directory for an elevator, giving you an orientation for which way to go when you exit.
The difficulty was that our logo’s font doesn’t exist; it was designed by an artist sometime in the ’80s or ’90s, and we have a couple of high-resolution files but no vector graphics. So my challenge was to take a high-resolution image and turn it into a sign with directional indicators to be placed in an elevator.
I decided I was going to use MakerWorld’s Make My Sign (free) for making this thing which did everything I needed it to do except provide arrows and turn a PDF the size of Rhode Island into an SVG.
For the arrows, I just googled “left arrow emoji” and “right arrow emoji” and cut and pasted them into a text box, because that looked perfect. I placed white text on a dark background, and I had everything I needed except our logo.
Turning a PDF image into an SVG involved cutting the logo out in Windows using Windows+Shift+S, pasting it into an MS Paint document, saving as a PNG, then going to PNGtoSVG.com (also free, no registration required, no emailed link) and playing with simplifying the logo from multicolor down to one or two colors.
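For the curious, the color-simplification step can be approximated in a few lines. This is just a conceptual sketch with made-up pixel data; I have no idea what algorithm PNGtoSVG.com actually uses, but snapping every pixel to its nearest color in a tiny palette is the basic idea behind reducing a logo to one or two colors before tracing it into vectors.

```python
def nearest(color, palette):
    """Return the palette entry closest to color, by squared RGB distance."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

def simplify(pixels, palette):
    """Snap every (r, g, b) pixel to its nearest palette color."""
    return [nearest(px, palette) for px in pixels]

# Two-color palette: black and white.
palette = [(0, 0, 0), (255, 255, 255)]
pixels = [(10, 12, 8), (240, 250, 245), (130, 130, 120)]
print(simplify(pixels, palette))
```

The actual tracing of the flattened bitmap into SVG paths is a separate step (the kind of thing tools like potrace do) and isn’t shown here.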
Downloaded the SVG, imported into Make My Sign, resized, positioned, and printed.
Now, it’d be really cool if I showed you what I made, but I’m not entirely enthused at the prospect of broadcasting where I work to the world (you can find it easily enough), so I’ll just throw in the image of the Pocketables printable logo I made while figuring out all the steps required to make my project work.
Fun times. As a note I have printed several suite numbers with the removable contraption but this one was fun and made me a wee bit giddy printing up my company’s logo. Yeah I’m boring.
I’ve got my A1 Mini at work because 1) I’ve got a large work project I am doing on it 2) I have no space at home, and 3) every time that printer is printing I am sneezing. So I use it when I can be in another location.
I started a print on Friday with some brand-new PLA from Bambu Lab. I had printed a few things earlier in the day with no problem, but then one of the projects I downloaded from MakerWorld printed so weirdly I aborted it (globs, not sticking to the surface). I was in a rush while closing down the software, accidentally chose to update preferences, and now I get spaghetti.
Womp womp. The above spaghetti came off a spool which was not the new spool and had produced nothing but working prints until I accidentally updated something.
I highly suspect I managed to break the settings on a project, but now I’m trying to figure out how to fix it. Fun times, since it’s not at my house and I can’t clear the plate until tomorrow.
So I now know spaghetti detection is not implemented yet on the A1 mini…
Oddly, I’m not seeing a lot of help when I search this up, other than: delete a profile, log back into the program, and do not sync cloud profiles.
I’ll reveal the amazing solution when I find it. At a little over a month in, this is the first challenge I’ve faced, made more challenging by the printer being 8 miles away from me at the moment.
The fix appears to have been: close Bambu Studio, open it again, log out, log back in, and do not sync cloud values and settings. I’m three-quarters of the way through an SS Benchy with the new filament and no evident issues.
That said, the spaghetti print above appears to have been fine through about a quarter of the print, and then the base was flung off the textured plate. I now have questions about whether this was an issue of the piece not being centered rather than a bad setting.
But all appears well with the world at the moment… which is nice, because I actually lost sleep trying to retrace my steps.
The other possibility is that a Dreo fan I recently reviewed was running at an odd speed and blowing on the unit, cooling down the front of the plate, which is where all my fails seem to have occurred. I suspect Google Assistant misheard something and set it to Tornado.
The Banana Pi BPI-WiFi6 Mini is a tiny computer board designed for use as a DIY wireless router with support for open source software. It features the same processor and wireless chip found in the larger BPI-WiFi6 router that launched earlier this year. But, as the name suggests, the new “mini” model packs those components into […]
I know that I’m extremely late to the Palworld hype. Palworld released in early January 2024, and since there aren’t a lot of updates dropping at the moment, the hype has died down. Yet the roadmap looks extremely promising. Since this game is still in early access, I’m always hesitant to write about it: you never know which mechanics or systems will change and evolve during the early access period, especially since we are currently only at v0.1.5.1. So I decided to hold off on my first impression/review article for now. But I wanted to talk about this game. So, here are some things I’d love to see in the full version of Palworld, or even in one of the next updates.
First of all, what is a Palworld?
Palworld is a combination of several games, all thrown into one. It’s easiest to describe Palworld by saying which games it combines.
Foremost, at its core, it’s a game you can somewhat compare to Ark: Survival Evolved. When I first started playing, I noticed the similarities right away. The way you have a crafting system to build your base, and monsters running around that you can tame or catch, is totally here as well.
I haven’t played a lot of Ark, so I can’t say if this next mechanic is also present there… but the fact that you can use your monsters to perform tasks in your base reminded me quite a lot of The Survivalists, a game where you are stranded on an island and can train monkeys to perform actions for you. The big difference is that here, certain monsters can only perform certain tasks, instead of the monkeys just copying you.
Now, a lot of other articles describe this game as Pokémon with guns. After playing it, I think that’s a somewhat unfair comparison. When I think Pokémon, I think of a journey with gyms and an evil team, turn-based battles, and a big tournament as its conclusion. While some Pals share a very similar design language with some Pokémon, most of the mechanics that make a Pokémon game aren’t here. The other big mechanic is the capture mechanic, but by now that isn’t exclusive to Pokémon games anymore, especially since we have games like Coromon.
There are also influences from the latest Zelda games, especially Tears of the Kingdom. There are huge, strong bosses roaming around the vast open-world map that you can take on at any time. The Korok seeds that upgrade your character are also here, in the form of effigies and Pal souls to upgrade your monsters.
This game really feels like the developers looked at all the games they liked playing, took what worked, threw it all into one pot, and shook it until it clicked together. The mechanics mesh extremely well. If I didn’t know any better, I’d say this was a finished game.
There are some silly bugs here and there, and in some spots the game feels unfinished. But overall, the game we have currently is amazing, and if this sounds like something you’d enjoy after reading what I wrote here… give it a try. I’ve only told you the most basic things about it. This is a survival game with elements from a lot of other games, like those I’ve already mentioned, but also Minecraft, Dragon Quest Builders, and various others.
Let’s talk improvements
While the game is a lot of fun to play at the moment, there are some things I wish were improved or updated. You get a lot of warnings about save-corruption bugs, crashes, and other issues, but besides the lighting engine occasionally giving up for a moment, or the AI of the pals or enemies doing some funky stuff, I haven’t seen too many worrying things.
Take, for example, this floating-rock screenshot. I have explored roughly half of the map after 35 hours of play, and this was the only floating rock I could find. That’s extremely impressive, especially if you consider the size of the map: it isn’t small at all. In the future, new islands and areas will be added, so if they deliver them with this kind of quality, I have no complaints.
Well, I do have one recommendation: I’d love to see more landmarks on the map. Currently, almost all the landmarks in the game are based on the terrain. I’d love to see more villages, or the ruins of them. I loved finding these things in the Zelda games and letting my mind wonder about what happened there. It’s a very difficult balancing act, since too many landmarks would make the map feel crowded and limit the number of possible base locations.
Basically, I’d love more reasons to go exploring these regions and get unique rewards. Besides completing the Paldex, there isn’t a lot of reason to explore certain areas. And once you have set up the right kind of farms and workstations for your pals in your bases, the chance you run out of resources is rather small.
Speaking of bases, currently you can only have three. Most likely, this cap exists to protect multiplayer performance, since the game simulates all three bases in the background; you can easily have a base close to big ore clusters to farm those while you are working in your other base. Without a cap, that would tank the performance of any computer or server. Still, I’d give players the tools to raise the cap. Personally, I think Minecraft has one of the best systems with its game rules: you can change almost anything to fit your playstyle, and even disable or remove caps that exist for performance reasons. Palworld already has quite a lot of toggles and sliders, but I’d expand on that.
Currently, the building system is decent, but it needs a lot of polish. My biggest problem with it is the stairs: it’s sometimes a nightmare to place stairs to get from one floor to another. Also, why can’t we place a full wall next to stairs? Most likely because some pals’ hitboxes would do some crazy stuff? Placing certain items or crafting stations on elevated floors doesn’t always work well either.
On top of that, besides the visual look of the floors, walls, and ceilings, what’s the point of being able to unlock stone foundations? As a test, I built a tall tower out of wood and another out of stone, and I didn’t find a difference. I’d love to see more meaning in what I unlock. Granted, stone can’t burn down, so if you get raided by flamethrower or fire enemies, your base isn’t in danger. But what’s the difference between stone and metal, then?
I honestly think it’d be a bad idea if PocketPair only created more content without giving the existing mechanics more depth. For example, something I’d love to see them implement in the pal task system is a mechanic I love in Cult of the Lamb: when you welcome a new member into your cult, you can set that member’s main focus. I’d love to be able to set a main focus per pal. For example, when things are damaged in the base after a raid, you could select one pal to go get the repair kits and fix all the damage first, before returning to their usual tasks. Or when you have a pal that can do multiple things, and you mostly need that pal to pick up items, you could disable their other abilities. Maybe we’d need special items for that, which can only be found in the wild, so we need to hunt for them. That’s an interesting idea to lure players out of their bases.
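To illustrate the kind of per-pal focus system described above, here’s a tiny sketch. Everything in it (names, fields, the greedy hand-out) is invented for illustration; it’s obviously nothing like Palworld’s real code, just the shape of the idea: a pal with a matching focus gets first pick of that task.

```python
def assign_tasks(pals, tasks):
    """Greedily hand each task to one capable pal, focused pals first."""
    assignments = {}
    busy = set()
    for task in tasks:
        capable = [p for p in pals if task in p["abilities"] and p["name"] not in busy]
        # A pal whose "focus" matches the task jumps the queue.
        focused = [p for p in capable if p.get("focus") == task]
        pick = focused or capable
        if pick:
            assignments[task] = pick[0]["name"]
            busy.add(pick[0]["name"])
    return assignments

pals = [
    {"name": "Cattiva", "abilities": {"transport", "repair"}},
    {"name": "Lamball", "abilities": {"transport", "repair"}, "focus": "repair"},
]
print(assign_tasks(pals, ["repair", "transport"]))
```

Without the focus field, Cattiva (listed first) would have grabbed the repair job; with it, Lamball repairs and Cattiva hauls, which is exactly the behavior I’d want after a raid.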
Dreaming like a madman
There are some UI elements I’d love to see changed as well. First, I’d love a mini-map. The compass at the top of the screen only shows things in roughly a 300m range, and that’s too short, especially since in some areas the warp spots are spread quite far apart. A mini-map where you could pin certain things, like the nearest warp spot, would be amazing.
Secondly, the weapons: I’d love to see their stats before I craft them. Right now it’s a guessing game whether a weapon I craft will be stronger or better than the one I currently have. It’d also be amazing if durability were shown outside the inventory as well. There is some space in the UI element at the bottom right, so why not show it?
Now, the inventory. Sometimes I have issues with combining stacks of items; I have to do it twice or thrice before they are combined. This is a rather small thing, but beyond those, I don’t have a lot of small quality-of-life suggestions. Maybe that sleeping in your bed during the day lets you skip to night when you’re hunting nocturnal pals?
Actually, there is one quality-of-life feature I think would be amazing. Quite often, when I’m hunting the stronger pals, I have my pal instructions set to “Focus on the same enemy.” It would be an amazing option to be able to tell your pals whether they are allowed to kill the wild pal or not, since dead pals can’t be captured.
There is one attack that is a double-edged sword. The Pokémon games have the self-destruct attack, and Palworld has it too, yet some wild pals with this attack always favor it over their other attacks. The bee pals, for instance, always swarm me, and instead of being able to weaken them so I can catch them, I just get blown up. You barely have a chance to do enough damage to make capturing possible.
I just remembered one other quality-of-life point. You can create saddles, gloves, and other gear that let you do special things with your pals. But why am I allowed to create more than one of each? I can only use one at a time, and they’re interchangeable: if you make a saddle for a certain bird pal, e.g. Helzephyr, you can use it on every Helzephyr you catch.
But the biggest quality-of-life feature PocketPair could add is a mini-map inside caves and dungeons. The number of times I’ve almost gotten lost in these caves is insane, especially since they only have a handful of rooms and it’s easy to get turned around and confused.
Now, to completely change the subject: I wish there was more music in the game. The soundtrack is amazing, but sadly there isn’t enough of it at the moment, so there are some silent stretches. A few more ambient tracks would go a really long way.
But I saved one of my biggest gripes for last: inventory management. This is a total pain in bases. Since pals can put things inside chests, you can forget about organisation. Thankfully, while crafting, the game pulls from all resources in your base, but if you need a certain item at another base, have fun searching through all your chests. What I usually do, if I can, is start crafting an item that uses the resource I want to move and then cancel it, since that drops the resources then and there. Now, how do you solve this without breaking the game and the idea that pals can put things in chests? What if pals could have a new organisation skill? Depending on its level, they’d either put red things with red things or set up a dedicated weapons chest, and come complain to you when there aren’t enough chests for their organisation.
Once your base is fully set up, the lack of depth starts to show as well. After you build your base, why should you return to it besides needing to craft or repair your weapons? Give us some activities to do in our bases once we’ve built them. I mean, come on, we even have the amusement furniture set. If only we could play some mini-games with our pals to increase their sanity, for example. Currently, there is not a lot you can do when a pal is stressed.
Of course, a certain balance needs to be maintained. The more things a pal can interact with, the more likely you are to create lag or overwhelm the player. Also, the more depth you create, the more there is to maintain, and maybe that’s not the type of game PocketPair wants to make. How I currently see Palworld is as a playground sandbox in the schoolyard. It’s an amazing playground where you can make your own fun, but it’s only part of the schoolyard and only has a swing, some monkey bars, a climbing rack, a small castle, and a slide. It’s all solidly built and an amazing place to spend your time… but then you notice the potential this sandbox has to grow. What if we enlarged that sandbox with another castle, so multiplayer can be player versus player as well? Or wait, why don’t we add an underground to that sandbox?
All I’m saying is that Palworld currently has an amazing foundation; the biggest issue is that the game lacks depth. While the current roadmap has a lot of expansions and more content, I hope PocketPair doesn’t forget to make it more than just surface level. For example, imagine that the raid bosses can be captured but barely have any unique skills. Why should the player do the raids then? What reward do you get out of it? A mechanic isn’t automatically fun just because it’s enjoyable to do once; players will get bored and look for ways to spice things up or challenge themselves.
Now, with that said, I’m going to close off this article. I’m quite excited for the future of Palworld, and I’m going to wait a few more updates before I write a review of the game. But overall, I really like what I see. The basis of an amazing title is already here, and I think we’ll get an even better game when it comes out of early access. Let’s wait and see what happens when the first big updates drop, especially the raid bosses that were teased a few weeks ago.
Thank you so much for reading this article; I hope you enjoyed reading it as much as I enjoyed writing it. What do you think of Palworld, and what should be added or changed? Let me know in the comment section down below. Also, what do you think of my ideas? I’m curious, so feel free to leave a comment about that too. I hope to welcome you in another article, but until then, have a great rest of your day and take care.
Today was an interesting day – as I may have mentioned, I’m printing up fast removable suite number signs as a work project using a Bambu A1 Mini. Today’s task was to get our logo and a quick left/right directory into an elevator sign, giving a quick orientation for which way to go when you exit the elevator.
The difficulty was that our logo’s font doesn’t exist; it was designed by an artist sometime in the 80s or 90s, and we have a couple of high-resolution files but no vector graphics. So my challenge was to take a high-resolution image and turn it into a sign with directional indicators to be placed in an elevator.
I decided to use MakerWorld’s Make My Sign (free) for this, which did everything I needed it to do except provide arrows and turn a PDF the size of Rhode Island into an SVG.
For the arrows, I just googled “left arrow emoji” and “right arrow emoji” and copied and pasted them into a text box, because that looked perfect. I placed white text on a dark background, and I had everything I needed except our logo.
Turning the PDF image into an SVG involved grabbing the logo in Windows with Windows-Shift-S, pasting it into MSPaint, saving it as a PNG, then going to PNGtoSVG.com (also free, no registration required, no emailed link) and playing with simplifying the logo from multicolor down to one or two colors.
Downloaded the SVG, imported into Make My Sign, resized, positioned, and printed.
Now it’d be really cool if I showed you what I made, but I’m not entirely enthused at the prospect of broadcasting where I work to the world (you can find it easily enough), so I’ll just throw in the image of the Pocketables printable logo I made while figuring out all the steps required to make my project work.
Fun times. As a note, I have printed several suite numbers with the removable contraption, but this one was fun and made me a wee bit giddy: printing up my company’s logo. Yeah, I’m boring.
I’ve got my A1 Mini at work because 1) I’ve got a large work project I’m doing on it, 2) I have no space at home, and 3) every time that printer is printing, I’m sneezing. So I use it when I can be in another location.
I started a print on Friday with some brand-new PLA from Bambu Lab. I had printed a few things earlier in the day with no problem, but then one of the projects I downloaded from MakerWorld printed so weirdly I aborted it (globs, not sticking to the surface). I was in a rush closing down the software and accidentally chose to update preferences, and now I get spaghetti.
Womp womp. The above spaghetti is off a spool that was not the new one and had produced nothing but working prints until I accidentally updated something.
I highly suspect I managed to break the settings on a project, but now I’m trying to figure out how to fix this. Fun times, since it’s not at my house and I can’t clear the plate until tomorrow.
So I now know spaghetti detection is not implemented yet on the A1 mini…
Oddly, I’m not seeing a lot of help when searching this up, other than: delete a profile, log back into the program, and don’t sync cloud profiles.
I’ll reveal the amazing solution when I find it. At a little over a month in, this is the first challenge I’ve faced, made harder by the printer being 8 miles away from me at the moment.
The fix appears to have been to close Bambu Studio, reopen it, log out, log back in, and not sync cloud values and settings. I’m three-quarters through an SS Benchy with the new filament and no evident issues.
That said, the spaghetti print above appears to have been fine through about a quarter of the print before the base was flung off the textured plate. I now wonder whether this was more an issue of the piece not being centered than of a bad setting.
But all appears well with the world at the moment… which is nice, because I actually lost sleep trying to retrace my steps.
Another possibility: a Dreo fan I recently reviewed was running at an odd setting and may have been blowing on the unit, cooling down the front of the plate, which is where all my failures seem to have occurred. I suspect Google Assistant misheard something and set it to Tornado.
Just as smartphone hardware and processing power reached maturity, tech giants have found a new way to sell you an upgrade: AI. Even Apple has jumped on the bandwagon as it gears up to launch a revamped Siri and Apple Intelligence via an iOS 18 update later in 2024. Google, meanwhile, has pushed Android brands to adopt its Gemini family of language models for everything from image editing to translation since last year. But while Apple Intelligence and Google Gemini may look similar on paper, the two couldn’t be more different in reality.
Even though Apple tends to take a more conservative approach when adopting new technologies, its AI push has been swift and comprehensive. With that in mind, let’s break down how Apple Intelligence differs from Google Gemini and why it matters.
Apple Intelligence vs Google Gemini: Overview
The biggest difference between Apple Intelligence and Gemini is that Apple Intelligence is not anchored to any single app or function. Instead, it refers to a wide variety of features available across the iPhone, iPad, and Mac. In other words, Apple has made AI as invisible as possible — you may not even realize its presence outside of certain obvious instances like Siri.
On the other hand, Gemini started its life as a chatbot to compete with the likes of ChatGPT and has gone on to replace the Google Assistant. Even though Gemini’s capabilities extend beyond chat, features like text summarization and translation can vary depending on your smartphone of choice. For example, Samsung’s Galaxy AI offers a different set of AI features than those found on Google Pixel devices, even though both companies use (and advertise) Gemini Nano.
Gemini's feature set differs from one device to another, while Apple Intelligence is standard.
While introducing Apple Intelligence, the Cupertino giant also made a big deal about its commitment to privacy. Cloud-based AI tasks will be performed strictly on Apple’s servers on the company’s own hardware. More importantly, human-AI interactions will not be visible to anyone besides the user, not even to Apple.
However, the platform isn’t entirely closed off either; Apple has announced a ChatGPT integration for complex Siri queries. Rumors also indicated that the company may offer responses from Gemini as an alternative to ChatGPT. This willingness to include third-party integrations marks a significant shift for Apple, which traditionally prefers to keep its ecosystem tightly controlled. However, it offers Apple a way to offload blame if the AI model responds with unsafe or misleading information.
Are Apple Intelligence and Gemini free?
Yes, both Apple Intelligence and Gemini are free to use. However, Google offers an optional paid tier called Gemini Advanced that unlocks higher quality responses, thanks to a larger language model. While Apple doesn’t directly charge for its AI features, you can link a ChatGPT Plus account to unlock paid features like the latest GPT-4o model.
Apple Intelligence vs Google Gemini: Features compared
Credit: Rita El Khoury / Android Authority
Both Apple Intelligence and Gemini power a slew of AI features, but they differ slightly in terms of their implementation and availability. Here’s a quick rundown:
Assistant: Using the power of large language models, Apple’s Siri and Google’s Gemini can both act as capable digital assistants and answer any question under the sun. Based on Apple’s demos, the new Siri has a clear advantage, as it can coordinate actions across apps. For example, you can ask it to send photos from a specific location to a contact without opening either the Photos or Messages app. Gemini doesn’t offer this kind of cross-app functionality yet.
Screen context and personalization: Apple Intelligence can access information on your screen before responding. It can also access texts, reminders, and other data across Apple apps in the background. With Gemini, you have to manually tap “Add this screen” each time to let the AI read it.
Photo editing: Google uses Imagen 2 instead of Gemini for image-related tasks but it’s still surprisingly capable — Magic Editor in Google Photos can remove objects, replace the sky, and more. Samsung also uses the same model for its Photo Edit feature. Apple Intelligence adds an object cleanup tool to the Photos app but it does not offer as many AI editing options as Magic Editor.
AI image generator: Apple’s Image Playground is a new app that creates AI-generated images and emoji, either based on your contacts or custom descriptions. These can be easily dropped into chat apps. Gemini can generate images too, but only via typed prompts.
Mail and productivity: While you can find Gemini in many Google apps these days, the majority of features are unfortunately locked behind Gemini Advanced. Help Me Write in Gmail and Google Docs, for example, won’t appear without the subscription. Apple’s Mail app, on the other hand, will summarize your emails using an on-device model. A feature called Smart Reply will also generate a reply on your behalf after asking relevant questions based on the incoming email.
Writing tools: Apple leads in this area as you can select any piece of text across the operating system and perform AI language tasks like proofreading, summarization, and paraphrasing. Galaxy AI offers similar tools via Samsung Keyboard’s Gemini Nano integration but you won’t find it on every Android phone. In fact, it’s even missing on Google’s Pixel devices and Gboard.
Apple Intelligence and Gemini: Supported devices and availability
Credit: Robert Triggs / Android Authority
Since many Apple Intelligence features utilize an on-device large language model, we knew that Apple would only bring it to relatively modern devices. However, the company has gone further than many expected and locked the entire suite to the iPhone 15 Pro series. That’s right — the regular iPhone 15 series (and earlier models) will not support Apple Intelligence, even the parts that rely entirely on the cloud.
Google, meanwhile, has done a commendable job of bringing Gemini to as many Android devices as possible. The chatbot is available on every single Android smartphone, for instance, and even features powered by the on-device Gemini Nano model are available on more devices, like the Pixel 8a.
Apple Intelligence won't be available to the vast majority of users for the foreseeable future.
Over on the computing side, Apple is more generous, as it will deliver AI features to all Macs dating back to the M1 chip from 2020. Meanwhile, Google offers limited Gemini features in ChromeOS, and you can only use them on newer Chromebook Plus machines. That said, the Gemini chatbot is accessible via a web browser on any computer.
Availability is another sore spot for Apple Intelligence. It will not launch in the UK, the European Union, or China. You will also need to set your device to the “US English” locale. While these restrictions may be relaxed at some point, Gemini is far ahead of the curve, as it supports all major languages and regions.
Apple Intelligence vs Google Gemini: Privacy
Credit: Apple
Adding AI to anything is risky — the technology is prone to hallucinating and generating misleading information that could ruin a brand’s reputation. Just ask Google; the company faced backlash over its AI Overviews feature in search and has since walked it back. Another risk is data privacy — nobody wants to share sensitive data only to have it leaked or used to train future AI.
Apple has countered this problem by using a model small enough to power many AI features entirely on-device. It also employs a strategy called Private Cloud Compute, which works as follows:
When a user makes a request, Apple Intelligence analyzes whether it can be processed on device. If it needs greater computational capacity, it can draw on Private Cloud Compute, which will send only the data that is relevant to the task to be processed on Apple silicon servers. When requests are routed to Private Cloud Compute, data is not stored or made accessible to Apple, and is only used to fulfill the user’s requests.
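To make that routing decision concrete, here is a purely illustrative sketch in Python. All names, types, and the decision criterion are invented for illustration; Apple has not published implementation details of Private Cloud Compute:

```python
# Toy illustration of on-device vs. cloud routing, loosely modeled on the
# Private Cloud Compute description above. Everything here is hypothetical;
# Apple's actual implementation is not public.
from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    needs_large_model: bool  # e.g. a long-context summarization task


def route(req: Request) -> str:
    """Decide where a request is processed."""
    if not req.needs_large_model:
        # Most requests are handled by the on-device model;
        # nothing leaves the phone.
        return "on-device"
    # Only the data relevant to the task is sent to Apple silicon servers,
    # and (per Apple) it is neither stored nor accessible to Apple afterward.
    return "private-cloud-compute"
```

The key design point is that the cloud is a fallback, not the default: a simple request like setting a timer would never leave the device under this scheme.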
Gemini, meanwhile, also comes in multiple variations. The smallest language model, Gemini Nano, runs entirely on your device and is the most private option. We have a list of every Gemini Nano-powered feature on Pixel devices, and Galaxy AI has a similar feature set too.
For more complex tasks, however, you will need to use Google’s cloud-based Gemini models. And unsurprisingly, the search giant’s privacy policies aren’t as user-friendly — Gemini interactions are not only stored on Google’s servers, but they are also used to train and improve future language models. In other words, it’s the complete opposite of what Apple offers and you may want to avoid sharing sensitive information with Google’s chatbot. You can opt out of AI model training on Gemini but that comes at the cost of losing access to chat history and Extensions that allow the chatbot to access data from Gmail and other Google apps.
Overall, both AI platforms trade blows in terms of features, at least on paper. However, Apple Intelligence’s deep OS-level integration makes it far more useful day-to-day than Gemini. The only downside is that you will need the latest iPhone — and the Pro model at that. Google and Samsung may not offer the same depth, but they have done a remarkable job of bringing AI to older or less expensive devices.
Gemini Nano-powered scam call detection is on its way, and the Phone app is showing early evidence.
Phone will start differentiating between spam calls and scam calls.
In addition to automatic scam detection, users may be able to manually report suspicious calls.
Update, August 1, 2024 (08:15 AM ET): We have managed to activate some of the UX of the upcoming feature.
Credit: Assemble Debug / Android Authority
As you can see, the user will be asked to choose between reporting the call as spam or as a scam.
Original article, July 25, 2024 (12:19 PM ET): Are you sold yet on the potential of AI? Smartphone features powered by AI feel like the only thing manufacturers are talking about anymore, but how many of those are actually useful tools you’re interested in, and how many seem more like fancy tech demos? Even if you’re still waiting for that killer app, we’ve got reason to be optimistic, and have heard about a few compelling projects in the works, like Google using Gemini Nano to keep you safe from scammers on voice calls. As we wait to get full details from Google on how that will arrive, we’re already seeing some early evidence of it in the Phone app.
Opening up the new Google Phone 138 beta release, we spot a number of text strings that sound related to this incoming functionality:
<string name="report_call_as_scam_action">Report call as scam</string>
<string name="report_call_as_scam_details">Unknown callers asking for your personal, financial, or device info</string>
<string name="report_call_as_spam_action">Report call as spam</string>
<string name="report_call_as_spam_details">Nuisance calls, irrelevant or unsolicited promotions, offers, etc.</string>
<string name="block_or_report_details">Information reported will only be used by Google to improve spam & scam detection.</string>
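For readers curious how teardowns surface strings like these: resources decompiled from an APK (for example, with apktool) end up in a `strings.xml` file that can be filtered with a few lines of standard-library Python. This is a generic sketch of that step, not the actual tooling used for this teardown, and the inline XML simply mirrors some of the strings quoted above:

```python
# Generic sketch of filtering decompiled Android string resources.
# In a real teardown you would parse res/values/strings.xml from
# apktool output rather than an inline string.
import xml.etree.ElementTree as ET

xml = """<resources>
  <string name="report_call_as_scam_action">Report call as scam</string>
  <string name="report_call_as_spam_action">Report call as spam</string>
  <string name="app_name">Phone</string>
</resources>"""

root = ET.fromstring(xml)
# Keep only the strings whose resource name hints at call reporting.
report_strings = {
    s.get("name"): s.text
    for s in root.iter("string")
    if s.get("name", "").startswith("report_call")
}
```

Grepping resource names this way is how new, unreleased features are usually spotted before any UI for them is visible.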
The first takeaway there is the distinction being drawn between scams and spam; right now, the app’s formal focus is only on spam (though we could see scams counting as “unwanted calls” in general). But going forward, Google Phone is preparing to be explicit about the difference.
We also notice that this seems to describe a system for manually reporting calls. The way Google talked about it back at I/O, it sounded like Gemini would be making the decision about characterizing the call as legitimate or not, so it’s interesting to consider that we may also be able to flag calls that Google misses.
“Sharpie,” if you haven’t surmised just yet, is Google’s internal codename for this system. All these functions and variables present in the app code shine further light on how Phone’s AI scam detection will work and arrive. The first two you see here appear to correspond with the user interaction options Google presented back when announcing the feature in May:
Even though all the processing for scam detection will take place on your phone, limiting the privacy implications of using it, Google’s been clear that it’s not forcing this on anyone, and the system will be opt-in when it arrives. That has us raising our eyebrows just the slightest bit at what appears to be a flag for automatically using AI scam detection — perhaps it’s intended for managed devices, or for use with Family Link?
While we still have plenty of questions about how Phone’s scam detection will arrive, and exactly how it will operate once it does, we’re hugely excited about the idea of it getting here. Scam calls are a serious problem affecting some of society’s most vulnerable members, and it’s not always easy to teach people how to recognize when they’re being taken advantage of. If we can offload some of that burden onto AI-powered systems, there’s the real potential to help protect a lot of people.
Hopefully the progress we’ve spotted indicates that Google is well on its way to getting this system running. Will it debut with the Pixel 9 in just a few more weeks? We’ll know soon!
We’ve managed to get an early look at Gemini’s upcoming Google Keep, Tasks, and Calendar extensions.
The Google Keep extension will let you create new notes and lists, add information to notes, and edit existing lists.
The Tasks and Calendar extensions will let you create new tasks and events and view existing tasks and events.
Gemini will soon get a host of new extensions that will enable integration with various Google services. We recently shared details about a few of the upcoming extensions, and we’ve now managed to get an early look at the Keep, Tasks, and Calendar extensions ahead of the rollout.
As you can see in the following video, the Google Keep extension allows you to ask Gemini to create new notes and lists, add information to notes, and add or remove items from lists.
The Google Tasks extension, on the other hand, lets you use Gemini to create new tasks, such as reminders. It also lets you easily view existing tasks and their due dates.
Finally, the Google Calendar extension lets you create new calendar events, view all upcoming calendar events or ones on a specific date, and edit calendar events.
Google first teased these extensions at I/O earlier this year in May, and they finally seem ready for rollout. The extensions will likely let users do more than what we’ve shown in the videos above, but we’ll have to wait until the official release to get a complete picture of the extensions’ capabilities.
A teardown of the Google app has revealed a Spotify extension for Gemini.
We were able to activate the extension and play music from Spotify via the Google chatbot.
Gemini offers support for extensions, allowing you to use other Google and third-party apps within the voice assistant. The company currently offers a YouTube Music extension, but what if you use a different music streaming platform?
An Android Authority teardown of the Google app (version 15.30.27.29) has revealed that a Spotify extension for Gemini is in the works. A description of the extension (seen in the first image below) suggests that this will let you play both music and podcasts.
We were able to get the feature working on our phone, asking Gemini to play a song via Spotify. Gemini briefly shows a YouTube Music info card after processing the request, but the song indeed plays via Spotify. It’s also worth noting that the chatbot can play music via Spotify in the background instead of launching the app. Check out our video below for a better idea of how this extension works.
Nevertheless, this is good news as there are hundreds of millions of Spotify users out there. So you don’t have to sign up for YouTube Music Premium if you want Gemini integration on your Android device.
This isn’t the only upcoming Gemini extension we’ve spotted in recent days. We discovered that several more unannounced Gemini extensions are in the works, including Google Home, the Phone app, and Utilities. This is all encouraging news if you thought Gemini could use more comprehensive integration with other apps and services.
Have you seen Google's "Dear Sydney" ad? The one where a young girl wants to write a fan letter to Olympic hurdler Sydney McLaughlin-Levrone? To which the girl's dad responds that he is "pretty good with words but this has to be just right"? And so, to be just right, he suggests that the daughter get Google's Gemini AI to write a first draft of the letter?
If you're watching the Olympics, you have undoubtedly seen it—because the ad has been everywhere. Until today. After a string of negative commentary about the ad's dystopian implications, Google has pulled the "Dear Sydney" ad from TV. In a statement to The Hollywood Reporter, the company said, "While the ad tested well before airing, given the feedback, we have decided to phase the ad out of our Olympics rotation."
The backlash was similar to that against Apple's recent ad in which an enormous hydraulic press crushed TVs, musical instruments, record players, paint cans, sculptures, and even emoji into… the newest model of the iPad. Apple apparently wanted to show just how much creative and entertainment potential the iPad held; critics read the ad as a warning image about the destruction of human creativity in a technological age. Apple apologized soon after.
Google says Gemini is coming to teens with educational accounts in the coming months.
The chatbot will be available in English to teens with school accounts in over 100 countries.
Google already offers Gemini to teenagers using their personal accounts, but teens weren’t able to use the chatbot with their educational accounts.
Now, Google has announced that Gemini is coming to teenagers via their school-issued accounts in the “coming months.” The company added that this option will be available in English in over 100 countries.
Gemini on mobile changes its answer mid-sentence when asked whether Google is unethical.
The chatbot begins to answer affirmatively before it replaces this response with a non-answer.
Gemini on the web gives a more comprehensive response about Google’s ethical concerns, though.
There’s no shortage of concerns about Google’s ethics, ranging from its privacy issues to YouTube’s promotion of toxic videos. However, it looks like Google might be barring Gemini on mobile from answering questions about its parent company’s ethics.
Athenil noticed that Gemini on mobile abruptly changed its answer when asked whether Google had done anything unethical. We were able to reproduce this on our own phone — check out our video below.
Gemini, the Assistant replacement, was offered to me today and I fell for it. I quickly ran it through a list of things I do on a regular basis, and for many it worked fine, but for my driving routines it failed to the point that I’m going to have to switch back while I’m in the car.
Update: this has evidently been around for a couple of weeks; it was just new to me to be bugged to switch.
The main issue is that there’s no YT Music integration at the moment. It will pull up a list of YouTube videos you can select from, which is essentially useless when driving. What’s worse, for me at least, is that asking it to play the news results in nothing. It can’t, and there’s no way to ask it to hand off to Assistant to play the news.
Yeah, I like news and music in the car… this kills that. Or at least it does for me as of the time of writing. I’m betting they fix the music integration pretty quickly, as they already have other integrations like Google Maps, and “navigate home” still works… it’ll just be a long and quiet ride home.
It seems to do well with your standard LLM responses, but it doesn’t do continued conversation, so I find it asking me a lot of questions and then discovering that my spoken answers have gone nowhere.
I suspect this will get better quite quickly; however, this being Google, I suspect the things I use most will be the very last to start working.
Gemini also seems to suffer some identity issues, as it believes it’s Assistant in some replies. The replies in general are much more expansive than Google Assistant’s were, and asking follow-up questions is quite useful, except I have to say “Hey Google” for every follow-up because continued conversation doesn’t seem to be a thing here.
Minor issues for something that actually looks like it could be amazing… but I do have to switch back for driving, and there’s no automatic way to switch from Assistant to Gemini or back at the moment. Actually, I see no way to switch out at this point, and I’m stuck with Gemini telling me conflicting things about changing assistants, none of which seem to work… womp womp.
Immediately after I wrote this, and after multiple wrong replies, it gave me the correct info: open Gemini, press your profile pic, go to Settings, and at the bottom is “Digital Assistants from Google.” Press that and you can switch between Gemini and Google Assistant. It isn’t that hard, but you’ll find that Gemini is simply gone from your phone after that.
It’s not terribly hard to reinstall it from the Play Store, and it appears to survive the next switch, but it’s not exactly smooth.
I’ve been completely out of the loop today for a variety of car-related reasons, so I suspect this has something to do with all the Google announcements that I missed and am catching up on late in the day.
On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of "3.5" models that build upon Claude 3, launched in March. Claude 3.5 can compose text, analyze data, and write code. It features a 200,000 token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that shows related work documents in a dedicated window.
So far, people outside of Anthropic seem impressed. "This model is really, really good," wrote independent AI researcher Simon Willison on X. "I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump)."
As we've written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).
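For developers curious about the API access Anthropic mentions, requests go through the company's Messages endpoint. Here's a minimal sketch of a request body in Python; the model ID matches the one Anthropic published at launch, but the prompt and token limit are purely illustrative:

```python
import json

def build_request(prompt, max_tokens=1024):
    """Build a Messages API request body for Claude 3.5 Sonnet.

    Field names follow Anthropic's Messages API; the values here
    are illustrative, not a definitive integration.
    """
    return {
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

body = build_request("Summarize the GSM8K benchmark in one sentence.")
print(json.dumps(body, indent=2))
```

In practice you'd send this body to the API with your key via Anthropic's official SDK or a plain HTTPS POST; the 200,000-token context window applies to the combined input.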
Post-keynote at WWDC24, key Apple executives confirmed that the company wants Google’s Gemini on iPhones.
In fact, the company wants all manner of AI systems powered by large language models.
For now, only OpenAI is an official partner with Apple, which brings ChatGPT to the Apple Intelligence system.
At WWDC24 today, Apple took the wraps off Apple Intelligence, the company’s new umbrella name for the various generative AI tools that will soon come to iPhones, iPads, and Mac computers. To make at least some of these AI features a reality, Apple is relying on a partnership with OpenAI, creators of ChatGPT.
However, weeks before the WWDC24 keynote, we heard rumors that Apple was also in talks with Google about bringing that company’s Gemini to iPhones. That didn’t appear to pan out, but that doesn’t mean the deal is off the table.
The ONEXPLAYER X1 Mini is a handheld gaming PC with an 8.8 inch, 2560 x 1600 pixel LTPS display featuring a 144 Hz refresh rate, an AMD Ryzen 7 8840U processor with Radeon 780M integrated graphics, and a pair of detachable controllers that let you quickly switch between using the computer as a handheld or a […]
Google had confirmed that Gemini Nano would be coming to the Google Pixel 8 and Pixel 8a.
The Android AICore app update with the toggle responsible for enabling Gemini Nano is now rolling out for the Pixel 8, but it hasn’t been spotted on the Pixel 8a yet.
Enabling the setting doesn’t immediately begin downloading the Gemini Nano module just yet, though, so you’ll likely still have to wait for a server-side rollout.
Google’s Gemini Nano is an AI model that works on-device to execute AI tasks. It’s the smallest AI model in the Gemini family, but it is very important for all the internet-free, on-device AI capabilities it brings to a smartphone. Google’s Pixel 8 Pro is the only Pixel that can use Gemini Nano for Pixel AI capabilities, but the company caved after an uproar and agreed to bring Gemini Nano to the Pixel 8 and Pixel 8a, too. Just yesterday, we spotted the toggle for enabling on-device generative AI features within the Android AICore app, and now we can confirm that the Android AICore app update is rolling out to the Pixel 8, at least.
My colleague Adamya Sharma received the Android AICore app update on her Pixel 8, which includes the toggle. Curiously, the Pixel 8 Pro app was also updated, but there is no toggle in it. We couldn’t locate the Android AICore app for the Pixel 8a, though.
As you can see, the Android AICore app versions for the Pixel 8 and Pixel 8 Pro are different. The Pixel 8 Pro already has Gemini Nano features, and a toggle like this would allow users to disable AI features, which isn’t possible currently. On the Pixel 8 and the Pixel 8a, this toggle will allow users to enable AI features, as the phones lack them out of the box. The toggle isn’t enabled by default on these two phones.
Google will likely bring Gemini Nano support to the Pixel 8 and Pixel 8a in a future Feature Drop, so toggling this setting right now on the Pixel 8 doesn’t immediately begin downloading the Gemini Nano module. When the feature is rolled out, users will need to activate Developer Options and then toggle this new “Enabled on-device GenAI features” setting, which is present at Settings > System > Developer options > AICore Settings.
Did you receive the Android AICore app update on your Pixel smartphone? Did the toggle begin downloading Gemini Nano on your Pixel 8 or Pixel 8a? Let us know in the comments below!
Yesterday, we shared with you a preview of what you can do with Google’s new Gemini-powered “Ask This Page” feature, which was announced at I/O 2024. Today we’re getting our hands on another upcoming “Ask This…” feature, the one that works on YouTube videos.
Just like yesterday, this is an early hands-on preview with Ask This Video. The feature is not live yet, but Android Authority managed to activate it in the Google app. So, while we tried to push it a bit and see what it can do and where it might fail, there could still be room for improvement before Google launches it to the public.
Gemini Ask This Video: What it is and how it works
Credit: Rita El Khoury / Android Authority
Ask This Video is an upcoming Gemini-powered generative AI feature that helps you ask questions about any YouTube video you’re currently watching. Instead of scrubbing and skipping through different parts of that video to find a specific bit of information, you’ll be able to query Gemini and it’ll try to find the answer in that video, without coloring outside the lines. In theory, this should be a big time-saver if you’re looking for a specific piece of information in a YouTube video and you don’t want to waste time trying to find it.
To activate Ask This Video, you just tap and hold the power button to pull up Gemini on your Android phone while watching a YouTube video. Gemini is context-aware now, so it’ll know you triggered it in YouTube and surface an “Ask this video” chip on top of the pop-up menu. See the image above for reference.
Tap that and you’ll notice that Gemini has now attached the video to the pop-up, so you can start typing questions in natural language and Google’s AI will try to find answers. It takes about 6-8 seconds for Gemini to process the request and come back with an answer.
Ask This Video understands nuance sometimes
In the example above, you can see we asked Gemini about Android Authority‘s “Pixel 8a is here, but why” video, where my colleague C. Scott Brown argued that the Pixel 8a is a good phone, but its value and competitiveness are diminished by the better and frequently-discounted Pixel 8. But suppose you haven’t watched that video and you need to know what’s wrong with the phone in a few words to see if it’s worth watching (spoiler: it is good content). You could do what we did and check with Gemini to see what’s wrong or bad about the Pixel 8a. And I think it pretty much nailed the nuance of C. Scott’s argument.
Credit: Rita El Khoury / Android Authority
In the next example above, I asked it for the differences between the Nothing Ear and Ear (a) in my video. It didn’t list every single difference, but focused on the biggest ones and synthesized the most important bits. In the video, I mention these features and differentiating factors in several places, but not in succession, so once again, it understood that and didn’t make any mistakes in its summary. The answer is incomplete, though, in my opinion, as there are other factors to consider between the two earbud models. But for an early AI version, I’ll consider this a win. (Such is the state of AI summaries now that an accurate answer is counted as a win, even if it’s incomplete.)
Ask This Video can find an answer faster than you can say skip
I think the most impressive part of Ask This Video is how easily it can answer a pressing question, without you having to watch the whole video to unearth it. It’s not perfect yet, but in the case of my hands-on with Chipolo’s new Find My Device trackers, it correctly answered that you don’t need a separate app to use the trackers, and in Carlos Ribeiro’s fast-charging myths and truths video, it nailed his recommendation of sticking with 100W cables to keep your gear future-proof.
Ask This Video has the potential to become a genuinely useful feature when skimming videos and looking for answers. Speaking from personal experience, YouTube has become my go-to resource now for specific tutorials and how-tos (I find that the quality there is better than the random hundreds of SEO-targeted written articles), but it’s usually tough to find the exact piece of information I’m looking for in a lengthy video. I used to turn to YouTube’s video transcripts and search for specific keywords in them to quickly find my answer. Gemini should be much faster and more practical than that trick.
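The transcript-keyword trick described above is easy to script yourself. Below is a minimal Python sketch; the transcript segments use the text/start shape that transcript-export tools commonly produce, and the sample data is made up for illustration:

```python
def find_keyword(transcript, keyword):
    """Return (timestamp_seconds, text) pairs whose text contains the keyword."""
    keyword = keyword.lower()
    return [(seg["start"], seg["text"])
            for seg in transcript
            if keyword in seg["text"].lower()]

# Hypothetical transcript segments, in the common {"start": ..., "text": ...} shape.
transcript = [
    {"start": 0.0,   "text": "Welcome back to the channel."},
    {"start": 42.5,  "text": "The Pixel 8a uses the Tensor G3 chip."},
    {"start": 130.0, "text": "Battery life on the Pixel 8a is solid."},
]

hits = find_keyword(transcript, "pixel 8a")
for start, text in hits:
    print(f"{start:>7.1f}s  {text}")
```

This only matches exact keywords, which is exactly the limitation of the old transcript-search approach; Gemini's advantage is that it can answer questions phrased nothing like the words in the video.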
Google still has to fine-tune Ask This Video
As with everything AI, and specifically Google AI, things aren’t 100% perfect just yet. We didn’t try to “red team” Ask This Video, we just went for regular tech videos and questions. I’m sure when this feature goes live and people start pushing it to its limits, they could make it give bad, weird, and potentially unacceptable answers.
Going back to our tests, we ran across a couple of instances where Ask This Video wasn’t 100% spot on. In the first example above, we asked it whether the Pixel 8a was powerful and whether there was a better phone, based on my Pixel 8a tests video. The first time it answered, it only used the first half of the video, where I compared the 8a against the Pixel 7a and 8, which resulted in a glowing answer in favor of the new phone.
None of that was technically wrong, but it wasn’t the full picture. Since we know that the second half of the video looks at the competition, we tried to rephrase the question to nudge it in the right direction, and that’s where it told us that the OnePlus 12R is a more powerful phone in the same price range.
The problem is that random viewers won’t have this kind of context, so they might take the first answer at face value and not realize that the video went into a different set of comparisons later, and that there’s a more capable phone for the same price. This is the kind of context I’m afraid AI summaries will miss again and again until they get better at it. As someone who’s only recently become a YouTuber, I’ve seen so many depressing comments from people who didn’t watch my videos and jumped on a word in the title or the intro without seeing the nuance, and I fear these kinds of incomplete or wrong AI answers will create more situations like that, where we’ll be blamed for the AI’s failure to summarize or synthesize something correctly.
Credit: Rita El Khoury / Android Authority
The final example is the one where Gemini veered off-track. We asked it about the best analog options among my 10 favorite watch faces for the Pixel and Galaxy Watch and it returned three options. Only one — Nothing Fancy — is correct. Sport XR is a digital watch face and I even say that in the video when I introduce it. Material Stack is also a digital design, though I don’t mention it explicitly. Meanwhile, Gemini failed to find the option that is simply and obviously called “Analogue watch face.” It also missed “Typograph,” another watch face that I explicitly mention as having an analog design.
Let’s face it, though, this is not as dire as those terrible AI results in Google Search. But if this kind of simple error can occur with watch faces, who’s to say what can happen with more nuanced and complicated videos?
We kept our focus on tech in these early tests, but there’s a bit of everything on YouTube, from politics to social issues, cooking tutorials, sports highlights, and more. Even though Google has this ever-present “Gemini may display inaccurate info, including about people, so double-check its responses” notice at the bottom of the pop-up, we all know that most people will eventually just rely on the answer they’re getting. Errors in answers can be very detrimental, both to the viewer and the video creator, as more and more people start relying on Gemini and trusting it with their everyday queries.
Personally, I’m not a fan of this “move fast, break things, and ask for forgiveness later” approach with AI. I would have preferred if Google tested it more and waited for it to mature before throwing it out in the world. But investors and money speak, not users like you and me, so once again, this is another discussion for another day.
An APK teardown of Google’s AICore app shows the expected toggle for activating Gemini Nano features on the Pixel 8 and Pixel 8a.
Google said Nano would come to these phones through Developer Options, so this suggests that the launch is imminent.
However, it’s also possible this would allow users to opt out of Nano on the Pixel 8 Pro, too.
There are many versions of Google’s Gemini. Only one of them actually works using the hardware built into smartphones, though, which is Gemini Nano. So far, Nano support only exists on a handful of phones. Notably, this includes just one Pixel: the Google Pixel 8 Pro. When Google confirmed that Nano support wouldn’t come to the Pixel 8, there was a slight uproar — so much so that Google backtracked and agreed to bring Nano support to the Pixel 8 and, even better, the Pixel 8a, too.
The caveat, though, is that Google is going to force users to manually enable Nano support on the Pixel 8 and 8a through a toggle in developer options, while Pixel 8 Pro users already have the features automatically enabled. This means that the vast majority of Pixel 8/8a users won’t use Gemini Nano features because so few of them will know it’s even an option. Now, thanks to an APK teardown of the recent AICore app from Google, we can see the supposed toggle for this feature, suggesting an imminent launch. Interestingly, it might also mean more control of the feature for Pixel 8 Pro users, too.
First, let’s show you what we found. In the screenshot below, you can see the two toggles we expect to appear in Settings > Developer Options > AICore Settings on the Pixel 8 and Pixel 8a. The first toggle gives permission for AICore to use as many resources as possible (which you will almost certainly want to leave activated), and the second actually turns on Nano.
Credit: C. Scott Brown / Android Authority
We can’t say anything for certain until Google actually announces this, but we assume both of these toggles will be “off” by default. That’s how Google described it, so that’s what we’re going with for now.
In other words, this is the order things should go:
1. Google announces a Feature Drop that brings Nano support to the Pixel 8 and 8a.
2. In Developer Options, you activate the second toggle in the screenshot above and, optionally, the first one.
Theoretically, once you do those steps, you should be able to use Gemini Nano features on your Pixel 8 and Pixel 8a.
The very fact that the AICore app has this means we should expect Google to announce Nano support for the Pixel 8/8a very soon, possibly in just weeks or even days.
What about the Pixel 8 Pro?
One of the interesting side-effects of this toggle’s upcoming existence on the Pixel 8 and Pixel 8a is that it might also come to the Pixel 8 Pro. This would, in theory, allow Pixel 8 Pro users to disable Gemini Nano — something that’s currently not possible. As mentioned earlier, Nano support is enabled by default on the Pixel 8 Pro already, and without a toggle like this, there’s no way to turn it off.
Obviously, most people wouldn’t feel the need to disable Nano support, but it is possible that this toggle could give those folks the option. Just like with Pixel 8 and 8a users turning the feature on, Pixel 8 Pro users could follow the same steps to turn it off.
Samsung already allows users to disable/enable specific AI features through its Galaxy AI interface, which is built right into Android settings. Unfortunately, this toggle buried in Developer Options on Pixels wouldn’t be nearly as convenient, but at least it would give users more control, which is almost always a good thing.
Buying a good Father’s Day gift can be tough if you’re on a budget, especially if your dad is already on the tech-savvy side. Sometimes they may claim they don’t want anything, other times they might buy the thing you’re looking to gift without telling anyone. If you need help jogging your brain, we’ve rounded up a few of the better gadgets we’ve tested that cost less than $50. From mechanical keyboards and security cameras to luggage trackers and power banks, each has the potential to make your dad’s day-to-day life a little more convenient.
This article originally appeared on Engadget at https://www.engadget.com/best-gifts-for-dad-under-50-113033738.html?src=rss
The AYN Odin 2 Mini is an upcoming handheld game console from the makers of the AYN Odin and Loki line of devices. The name suggests that this Android-powered game console is a smaller version of the Odin 2 that launched last summer, and it does have many of the same specs, while sporting a […]
Google kicked off its annual Google I/O developer conference today with a two hour keynote where pretty much the only thing the company talked about was AI. But there were a lot of AI features to talk about. In the short term, the Circle to Search feature that’s now available on 100 million Android devices helps […]
Google has published official documentation of Gemini’s upcoming YouTube Music extension.
The documentation gives examples of prompts and prompt formats that you can use to create more prompts.
The extension is not currently live within Gemini but could launch soon.
We’re expecting some big AI announcements in the coming days, as both OpenAI and Google have events scheduled this week. While ChatGPT could become more of a search engine, Google could be looking to integrate more of Gemini into its ecosystem. Google has been testing a Gemini Extension for YouTube Music, which we have previously detailed with a demo video. Now, we have a better idea of its functionality with a list of commands that Gemini will accept for YouTube Music.
Android Authority contributor Assemble Debug spotted that Google has added official documentation on connecting YouTube Music to Gemini apps. The documentation states that YouTube Music isn’t available in Gemini in Google Messages and that the extension works with English prompts only for now.
We’ve managed to activate YouTube Music as an Extension in the Gemini Android app. This demo shows how the extension will work within Gemini.
The YouTube Music Gemini Extension allows Gemini to access your YouTube Music information to provide better search results.
The extension is not currently live within Gemini but could launch soon.
The race for AI has been underway for years, but ChatGPT pushed the industry from a marathon to a sprint. ChatGPT’s arrival caught Google somewhat off-guard, and the company has since doubled down on its AI efforts under the Gemini rebrand. One way Gemini could reach critical mass is to attract more users who are already present within the Google ecosystem, and Gemini Extensions could achieve this with tighter ecosystem integration. Google has been testing a Gemini Extension for YouTube Music that we have detailed previously, and we now have a better demo video to share with you on how it will work when it rolls out.
Android Authority contributor Assemble Debug managed to activate a new Gemini Extension for YouTube Music in the Google app v15.17.28.29.arm64. With some more work, we got the extension working well enough for a demo.
You can now launch Gemini in Chrome through its address bar. Type @gemini followed by your prompt to quickly get your response.
Gemini’s mobile app is now also rolling out to more languages and countries.
Extensions are also rolling out to all the languages and countries that Gemini supports.
Gemini is Google’s next big AI product, taking over the reins from Google Assistant. There’s still a long way to go for Gemini to mature, but Google is quickly getting there with iterative updates. As part of today’s update announcements, Google is rolling out Gemini features to more languages and countries and making it even easier to access Gemini in Google Chrome.
Launch Gemini in Chrome with @gemini
Chrome’s address bar works as a conventional address bar but also doubles as a quick search box. Now, it will play another role as a quick launcher for Gemini. You can type @gemini followed by your prompt in the Chrome address bar to automatically load Gemini’s web app with your response ready to go.
Google’s Gemini app for Android could soon gain a ‘real-time responses’ option.
This will allow you to read a response as it’s being generated instead of waiting for the entire response to be generated first.
This would be in line with the web version of Gemini.
The Gemini assistant didn’t exactly enjoy a smooth launch, but Google has been working to improve the chatbot since then. The latest addition to the service could be faster responses for the Android app, according to a new report.
Gemini assistant could get a feature that will allow it to integrate with third-party music streaming services.
Users will be able to pick their preferred music service to play music.
The feature was found within Gemini Settings.
Gemini assistant has been a great substitute for Google Assistant on many fronts. However, the LLM still has some limitations, like being unable to identify songs or work with third-party music services like Spotify. But Google could soon give Gemini a feature that would drop that limitation.
Google appears to be preparing to give its chatbot a new music-related feature. According to a tip from AssembleDebug provided to PiunikaWeb, Gemini may soon get a “Music” option that allows the user to “select preferred services used to play music.” This feature was discovered within the Gemini Settings page.
In the images below, you can see the feature appears as the second to last option in the list. When you tap on Music, you’re taken to a page where you can “Choose your default media provider.” This page appears empty at the moment with no services listed.
The feature hints that users will soon be able to pick a service that Gemini can integrate with. Once integrated, this would open up the ability to get Gemini to play music with voice commands.
It’s unknown if Google will eventually expand this function to other types of media, like audiobooks or podcasts. Furthermore, there is no information on when the company plans to roll this feature out. But if and when it does, it should be a win for music enthusiasts.
The M2 Pro Mac mini. (credit: Andrew Cunningham)
Bloomberg's Mark Gurman thinks that Apple's M4 chips for Macs are coming sooner rather than later—possibly as early as "late this year," per a report from earlier this month. Now Gurman says Apple could completely skip the M3 generation for some Macs, most notably the Mac mini.
To be clear, Gurman doesn't have specific insider information confirming that Apple is planning to skip the M3 mini. But based on Apple's alleged late-2024-into-early-2025 timeline for the M4 mini, he believes that it's "probably safe to say" that there's not enough space on the calendar for an M3 mini to be released between now and then.
This wouldn't be the first time an Apple Silicon Mac had skipped a chip generation—the 24-inch iMac was never updated with the M2, instead jumping directly from the M1 to the M3. The Mac Pro also skipped the M1 series, leapfrogging from Intel chips to the M2.
It was revealed on #TheAndroidShow that Gemini Nano is not coming to the Pixel 8.
It appears hardware limitations are to blame.
Gemini Nano will come to more high-end devices in the near future.
The Pixel 8 Pro has Gemini Nano, leaving Pixel 8 owners wondering when the on-device AI will come to the base model. It’s looking like the answer to that question is that it won’t.
Today, Google pushed out the latest episode of #TheAndroidShow, where it discussed a number of topics, including MWC, Android 15, and Gemini Nano. When the show got to the Q&A portion, however, there was an interesting revelation.
The GPD Win Mini is a handheld gaming PC with a 7 inch display and a clamshell design, featuring a QWERTY keyboard, touchpad, and built-in game controllers in the bottom section. GPD launched the first model in the fall of 2023, and now the company has launched a new model that’s available for pre-order […]
Lenovo subsidiary Hefei LCFC has designed a portable device called Gemini that combines two 7.8 inch E Ink displays in a way that allows you to use the system as a laptop computer (with a virtual keyboard), an actual notebook (with pen support), or a book (by folding the screen so you can see just […]
Google’s multimodal generative Gemini AI is capable of creating a multitude of outputs, including images. After the tool was caught repeatedly creating inaccurate images of historic scenes, Google has now decided to pull the emergency brake, temporarily pausing Gemini’s ability to create images of people altogether.
Gemini is now letting users set reminders through Google Assistant integration and Google Tasks.
The functionality appears to be slowly rolling out in the US.
There are a few things holding Gemini back from being a true replacement for Google Assistant on Android. This includes absent features like routines, media service provider support, and more. But it looks like Gemini is now getting at least one of the functions it was sorely missing.
According to 9to5Google, the Gemini app on Android can now set reminders with the help of Google Assistant integration and Google Tasks. This means the app now accepts commands like “remind me to turn off the lights tonight” or “remind me to grab the mail tomorrow morning.” Previously, attempting to set a reminder would get you an error message or a fake acknowledgment.
Formerly known as Google Bard, Google’s AI chatbot boasts a new name and a host of new capabilities. More significantly, a new Gemini app can now replace Google Assistant on your Android phone, assuming your phone is set to US English. I won’t spend too much time diving into the confusing nuances of Google’s AI strategy (or seeming lack thereof). Instead, I’ll leave that scrambled chaos to more eloquent voices and simply go hands-on with the Gemini app.
Gemini is rolling out to all users who subscribe to the Google One AI Premium plan, allowing them to use AI across Gmail, Docs, Slides, Sheets, and Meet apps.
Google is also launching a new “Chat with Gemini” feature for Gemini for Workspace.
Gemini for Google Workspace is also getting Business and Enterprise tiers, expanding access.
Google’s massive rebranding campaign turned all its AI products into Google Gemini. The rebranding is currently in progress, and starting today, Duet AI for Google Workspace will transition into Gemini for Google Workspace. As part of the move, Google has released a standalone “chat with Gemini” experience for Workspace customers while opening up Gemini access to all consumers through the Google One AI Premium subscription.
Gemini through Google One AI Premium across Google apps
Starting today, users who subscribe to the new $19.99 per month Google One AI Premium plan can access Gemini across their personal Gmail, Docs, Slides, Sheets, and Meet apps. This is rolling out to English language users in more than 150 countries.