
Gmail can now help you polish your drafts with a tap

Credit: Edgar Cervantes / Android Authority
  • Google is enhancing Gmail’s Help me write feature with a new “Polish” option that can help you draft a formal email from rough notes with a tap.
  • Gmail for Android and iOS is also getting new Help me write and Refine my draft shortcuts, making the tools easier to use.
  • These new features are now rolling out to select Google Workspace subscribers.

Google rolled out significant upgrades for Gmail’s Help me write feature earlier this year, giving users the ability to dictate text prompts to easily draft an email. The company also previewed a new “Polish” feature that could fix drafts to create a structured email at the touch of a button. This feature is now rolling out to users along with new Help me write and Refine my draft shortcuts for Gmail on Android and iOS.

Google announced the rollout in a recent Workspace Updates blog, revealing that the new Polish option will be available as part of Help me write’s Refine my draft feature on Gmail for mobile and web. With this feature, you’ll be able to enter rough notes into a draft and Gemini will “turn the content into a completely formal draft, ready for you to review in one click.”

5 ways talking to Gemini Live is much, much better than using Google Assistant

Since Gemini Live became available to me on my Pixel 8 Pro late last week, I’ve found myself using it very often. Not because it’s the latest and hottest trend, no, but because almost everything I hated about talking to Google Assistant is no longer an issue with Gemini Live. The difference is staggering.

I have a lot to say about the topic, but for today, I want to focus on a few aspects that make talking to Gemini Live such a better experience compared to using Google Assistant or the regular Gemini.

1. Gemini Live understands me, the way I speak


Credit: Rita El Khoury / Android Authority

English is only my third language and even though I’ve been speaking it for decades, it’s still not the most natural language for me to use. Plus, I have the kind of brain that zips all over the place. So, every time I wanted to trigger Google Assistant, I had to think of the exact sentence or question before saying, “Hey Google.” For that reason, and that reason alone, talking to Assistant never felt natural to me. It’s always pre-meditated, and it always requires me to pause what I’m doing and give it my full attention.

Google Assistant wants me to speak like a robot to fit its mold. Gemini Live lets me speak however I want.

Gemini Live understands natural human speech. For me, it works around my own speech’s idiosyncrasies, so I can start speaking without thinking or preparing my full question beforehand. I can “uhm” and “ah” mid-sentence, repeat myself, circle back to the main question, and figure things out as I speak, and Live will still understand all of it.

I can even ask multiple questions and be as vague or as precise as possible. There’s really no restriction around how to speak or what to say, no specific commands, no specific ways to phrase questions — just no constraints whatsoever. That completely changes the usability of AI chatbots for me.

2. This is what real, continuous conversations should be like


Credit: Rita El Khoury / Android Authority

Google Assistant added a setting for Continuous Conversations many years ago, but that never felt natural or all that continuous. I’d say “Hey Google,” ask it for something, wait for the full answer, wait an extra second for it to start listening again, and then say my second command. If I stay silent for a couple of seconds, the conversation is done and I have to trigger Assistant all over again.

Plus, Assistant treats every command separately. There’s no real ‘chat’ feeling, just a series of independent questions or commands and answers.

Interruptions, corrections, clarifications, idea continuity, topic changes — Gemini Live handles all of those.

Gemini Live works differently. Every session is a real open conversation, where I can talk back and forth for a while, and it still remembers everything that came before. So if I say I like Happy Endings and ask for similar TV show recommendations, I can listen in, then ask more questions, and it’ll keep in mind my preference for Happy Endings-like shows.
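That back-and-forth is, at its core, an accumulating chat history: every new question is interpreted in light of everything said before. Here’s a minimal sketch of the idea, with a hypothetical `ask_model` function standing in for the real LLM call:

```python
# Minimal sketch of how a multi-turn assistant keeps context.
# `ask_model` is an illustrative stand-in for a real LLM call.

def ask_model(history):
    # A real implementation would send `history` to a language model.
    # Here we just report how many prior messages the model can "see".
    return f"(answer informed by {len(history)} prior messages)"

class ChatSession:
    def __init__(self):
        self.history = []  # every turn is kept, so context accumulates

    def say(self, text):
        self.history.append({"role": "user", "content": text})
        reply = ask_model(self.history)  # the whole history travels each turn
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

Because the full history is carried along on every turn, a vague follow-up like “ones similar to that show” can resolve its reference without the user restating the original question.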

I can also interrupt it at any point in time and correct it if it misunderstood me or if the answer doesn’t satisfy me. I don’t have to manually scream at it to stop or wait for it as it drones on for two minutes with a wrong answer. I can also change the conversation topic in an instant or give it more precise questions if needed.

Plus, Gemini Live doesn’t shut off our chat after a few seconds of silence. So I can take a few seconds to properly assimilate the answer and think of other clarifications or questions to ask, you know, like a normal human, instead of a robot who has the follow-ups ready in a second.

Better yet, I can minimize Live and go use other apps while still keeping the chat going. I’ve found this excellent while browsing or chatting with friends. I can either invoke Live mid-browsing to ask questions and get clarifications about what I’m reading, or start a regular Live chat then pull up a browser to double check what Gemini is telling me.

3. TL;DR? Ask it for a summary


Credit: Rita El Khoury / Android Authority

As I mentioned earlier, every command is a separate instance for Google Assistant. Gemini Live considers an entire chat as an entity, which lets me do something I could never do with Assistant: ask for a summary.

So if I had a chat about places in Paris to visit and test the new Panorama mode on the Pixel 9 series, I can ask it for a summary at the end, and it’ll list all of them. This is incredibly helpful when trying to understand complex topics or gather a list of suggestions, for example.

4. Want to talk more about a specific topic? Resume an older chat


Credit: Rita El Khoury / Android Authority

At one point, I opened Gemini Live and said something like, “Hey, can we continue our chat about Paris panorama photos?” And it said yes. I was a bit gobsmacked. So I went on, and it seemed to really know where we left off. I tried that again a few times, and it worked every time. Google Assistant just doesn’t have anything like this.

Another way to trigger this more reliably is to open Gemini, expand the full Gemini app, tap on Recents and open a previous chat. Tapping on the Gemini Live icon in the bottom right here allows you to continue an existing chat as if you never stopped it or exited it.

5. Check older chats and share them to Drive or Gmail


Credit: Rita El Khoury / Android Authority

Viewing my Google Assistant history has always been a convoluted process that requires going to my Google account, finding my personal history, and checking the last few commands I’ve done.

With Gemini, it’s so easy to open up previous Live chats and read everything that was said in them. Even better, every chat can be renamed, pinned to the top, or deleted in its entirety. Plus, every response can be copied, shared, or quickly exported to Google Docs or Gmail. This makes it easy for me to manage my Gemini Live data, delete what needs to be deleted, and share or save what I care about.

Google Assistant still has a (significant) leg up


Credit: Rita El Khoury / Android Authority

Despite everything Gemini Live does well, there are many instances where I felt its limitations while using it. For one, the Live session is separate from the main Gemini experience, and Live handles only general-knowledge questions, not personal data. So I can ask Gemini (not Live) about my calendar, send messages with it, start timers, check my Drive documents, control my smart home, and more, just as I could with Assistant, but I can’t do any of that with Gemini Live. The latter is more of a lively Google Search experience, and the regular Gemini extensions aren’t accessible in Live. Google said it was working on bringing them over, though, and that is the most exciting prospect for me.

Gemini Live still doesn't have access to personal data, calendars, smart home, music services, and more.

Because of how it’s built and what it currently does, Gemini Live requires a constant internet connection and there’s nothing you can do without it. Assistant is able to handle some basic local commands like device controls, timers, and alarms, but Gemini Live can’t.

And for now, my experience with multilingual support in Gemini Live has been iffy at best — not that Assistant’s support of multiple languages is stellar, but it works. On my phone, which is set to English (US), Gemini Live understands me only when I speak English. I can tell it to answer in French, and it will, but it won’t understand me or recognize my words if I start speaking French. I hope Google brings a more natural multilingual experience to it, because that could be life-changing for someone like me who thinks and talks in three languages at the same time.


Credit: Rita El Khoury / Android Authority

Logistically, my biggest issue with Gemini Live is that I can’t control it via voice yet. My “Hey Google” command opens up the main Gemini voice command interface, which is neat, but I need to manually tap the Live button to trigger a chat. And when I’m done talking, the chat doesn’t end unless I manually tap to end it. No amount of “thank you,” “that’s it,” “we’re done,” “goodbye,” or other words did the trick to end the chat. Only the red End button does.

Google Assistant was a stickler for sourcing every piece of info; Gemini Live doesn't care about sources.

Realistically, though, my biggest Gemini Live problem is that there’s no sourcing for any of the info it shares. Assistant used to be a stickler for sourcing everything; how many times have you heard it say something like, “According to [website],” or, “on [website], they say…”? Gemini Live just states facts instead, with no immediate way to verify them. All I can do is end the chat, go to the transcript, and check for the Google button that appears below certain messages, which shows me related searches I can do to verify that info. Not very intuitive, Google, and not respectful to the millions of sites you’ve crawled to get your answer like, uh, I don’t know… Android Authority perhaps?

Pixel 9 is getting 10 new AI features, but at least one isn’t ready yet

  • Google is launching several new AI features for the Pixel 9, including the ability to talk to Gemini using voice. There’s even a new app for generating AI images, called Pixel Studio.
  • One of the biggest features is called Add Me, and it lets you virtually add the original photographer to a group photo by stitching together two images.
  • A Video Boost update also includes several improvements, including 8K scaling and HDR Plus. This is only for Pro users and will arrive a little after the phone’s initial release.


The Google Pixel 9 series will arrive with Android 14 instead of Android 15, but there are still plenty of software improvements to be found here, especially when it comes to AI. There are at least ten new AI features that we are aware of, with some of the most exciting additions being Gemini Live, Add Me, and Pixel Studio.

Historically, Google has announced new software and AI features, but not all of them have rolled out right away. The good news is that most of the new features are arriving at launch, though at least one won’t be ready until later. With that in mind, let’s jump right in and take a brief look at some of the biggest new AI features on the Pixel 9 series.

Pixel 9 AI features that are ready from day one


Pixel 9 Pro
Credit: C. Scott Brown / Android Authority

Let’s start with all the Pixel 9 features that are live from day one. All of these are available for the entire Pixel 9 series unless otherwise indicated.

Magic Editor adds auto frame and Reimagine

There are two new Magic Editor features, both of which will be at least temporarily exclusive to the Pixel 9 family. The first, auto frame, automatically frames your selected subject, even if that requires expanding the photo using AI. The second, Reimagine, lets you swap out backgrounds to add fireworks, pink clouds, and more.

Google Keep Magic List

You can now talk to Gemini and have it make you a grocery or to-do list using Google Keep. You don’t even have to dictate specific list items; just name the meals, and it can do the rest. Obviously, how well this works will probably depend on how specific you get. We hope you can even point it to specific sites with the recipes you want and have it do the work, but that remains unclear until we have more hands-on time with the devices.

Gemini Live


You can now have live, natural conversations with Gemini using Gemini Live, with ten different voices to pick from. While it will be available from day one, it is initially exclusive to Gemini Advanced users. We’ve tested Gemini Live out for ourselves and found it very impressive so far.

Pixel Screenshots


Credit: C. Scott Brown / Android Authority

Pixel Screenshots uses on-device AI to analyze all your screenshots. You can then ask Gemini questions, and it can pull up information from the screenshots in the form of easily digestible answers.
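Conceptually, this is an extract-then-search flow: text is pulled out of each screenshot on-device, and questions are answered against that index. The sketch below is purely illustrative, with a plain keyword match standing in for whatever model Google actually uses:

```python
# Illustrative sketch of screenshot Q&A: index extracted text, then
# answer questions by searching the index. Not Google's implementation.

def build_index(screenshots):
    # `screenshots` maps filename -> text already extracted by OCR
    return {name: text.lower() for name, text in screenshots.items()}

def ask(index, question):
    # Return the screenshots whose text matches any word of the question
    terms = question.lower().split()
    return [name for name, text in index.items()
            if any(term in text for term in terms)]
```

A real assistant would rank matches and phrase an answer, but the core idea is the same: the screenshots become a private, queryable store.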

Call Notes

Call Notes is built into the phone app and lets you record your calls. From there, it will create a transcript and use Gemini to produce a brief summary of the call. You can even search for these summaries and transcripts at any time in the future just by asking Gemini. This feature may not be available at launch in all regions, so your mileage may vary.
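The flow described there is a simple pipeline: audio to transcript, transcript to summary, with both kept searchable. Here’s a hedged sketch with stand-in functions (the real feature presumably uses on-device speech-to-text and Gemini for summarization):

```python
# Illustrative Call Notes pipeline; function bodies are stand-ins.

def transcribe(audio):
    # A real system would run speech-to-text on the recording
    return audio["spoken_text"]

def summarize(transcript, max_words=10):
    # Stand-in for a Gemini-generated summary: truncate to a few words
    return " ".join(transcript.split()[:max_words])

class CallNotes:
    def __init__(self):
        self.calls = []  # each entry holds a transcript and its summary

    def record(self, audio):
        transcript = transcribe(audio)
        self.calls.append({"transcript": transcript,
                           "summary": summarize(transcript)})

    def search(self, term):
        # Find past calls whose transcript mentions the term
        return [c for c in self.calls
                if term.lower() in c["transcript"].lower()]
```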

Pixel Studio


Pixel 9
Credit: C. Scott Brown / Android Authority

Pixel Studio is a brand new AI-powered app using Imagen 3. You can create new images through text prompts easily, but that’s not all. There’s even the ability to edit and modify these images after they are created. This allows you to better refine the image on the fly without having to completely generate a new one.

Pixel Weather


Credit: C. Scott Brown / Android Authority

This isn’t just a regular weather app; it adds a few extra AI features to the mix, including AI-generated summaries of the expected conditions. Pixel Weather is far from the most exciting addition to Google’s Pixel AI feature set, but it’s still a nice extra.

Add Me


Completed photo
Credit: C. Scott Brown / Android Authority

Add Me is arriving at launch but will initially be listed as a Preview (beta) feature. Add Me lets one user take a group photo, then swap places with the photographer, who steps into the spot they would have occupied in the first shot. Gemini then takes these two images and stitches them together, making it look like the whole group was present in the shot at once.

Pixel 9 AI features that won’t be ready until later

While most of the features above will be ready right away, a major update to Video Boost is on its way but won’t be ready at launch.

Video Boost with Night Sight technology

Credit: Google

Video Boost was introduced last year as a way to improve video quality, so it’s technically not new, but we’re counting it due to just how big an update this is. Rendering is now 2x faster, it works on zoom up to 20x, and there’s even support for AI 8K scaling. There’s also HDR Plus support in the works.

Asking for Gemini’s help on Android is getting a lot prettier


Credit: Edgar Cervantes / Android Authority

  • Google is rolling out a new floating overlay panel for Gemini on Android devices, featuring a subtle glow animation.
  • The panel allows Gemini responses to appear within the current app and enables a contextual understanding of on-screen content.
  • The update also includes a new “Ask about this video” chip that lets users ask questions about YouTube videos directly.


Google recently unveiled a series of exciting updates for its AI assistant, Gemini, during the Pixel 9 series launch event. While the introduction of Gemini Live mode stole the spotlight, Gemini is also getting a shiny new floating overlay panel for Android. (h/t: 9to5Google)

This new interface, currently being rolled out to Android users, features a visually pleasing glow animation that surrounds the panel whenever Gemini is activated. This subtle glow not only looks neat but is also a sign that your Gemini’s got a new trick up its sleeve: a contextual overlay that understands what you’re up to without taking over your whole screen.

This update was initially teased at the I/O 2024 conference in May and allows Gemini to deliver responses directly within the app you’re using rather than hijacking your entire screen. This design change aims to help users maintain focus on their current tasks while still benefiting from Gemini’s assistance. For those who prefer the more traditional, immersive experience, a quick tap on the top-right corner of the overlay will still expand it to full screen.

The update also includes a new “Ask about this video” chip, replacing the previous “Ask about this screen” prompt, which appears when Gemini is triggered on YouTube videos. This feature lets users request summaries or pose follow-up questions about the video’s content.

As for the glowing floating overlay, it’s still in the process of rolling out, so if you haven’t seen it yet, hang tight. Google says it’ll be hitting more Android devices in the coming weeks, both for regular Gemini users and those with Gemini Advanced subscriptions.

All in all, these updates are setting the stage for a more seamless and engaging Gemini experience. If you’re an Android user, keep an eye out for that glow.

'Help me write' gets a new shortcut in the Gmail app as Gemini learns to refine your drafts

Google's 'Help me write,' a tool that essentially started out as an AI suggestion feature for Gmail to help you complete common sentences, expanded to Chrome earlier this year and has evolved into a robust writing companion. Powered by Gemini, the tool's functionality includes writing suggestions and rewrites, with more significant updates rolling out now that will enhance its ability to polish and refine your email drafts.

Apple Intelligence vs Google Gemini: Which AI platform makes your phone more useful?


Credit: Rita El Khoury / Android Authority

Just as smartphone hardware and processing power reached maturity, tech giants have found a new way to sell you an upgrade: AI. Even Apple has jumped on the bandwagon as it gears up to launch a revamped Siri and Apple Intelligence via an iOS 18 update later in 2024. Google, meanwhile, has pushed Android brands to adopt its Gemini family of language models for everything from image editing to translation since last year. But while Apple Intelligence and Google Gemini may look similar on paper, the two couldn’t be more different in reality.

Even though Apple tends to take a more conservative approach when adopting new technologies, its AI push has been swift and comprehensive. With that in mind, let’s break down how Apple Intelligence differs from Google Gemini and why it matters.

Apple Intelligence vs Google Gemini: Overview

The biggest difference between Apple Intelligence and Gemini is that Apple Intelligence is not anchored to any single app or function. Instead, it refers to a wide variety of features available across the iPhone, iPad, and Mac. In other words, Apple has made AI as invisible as possible — you may not even realize its presence outside of certain obvious instances like Siri.

On the other hand, Gemini started its life as a chatbot to compete with the likes of ChatGPT and has gone on to replace the Google Assistant. Even though Gemini’s capabilities extend beyond chat, features like text summarization and translation can vary depending on your smartphone of choice. For example, Samsung’s Galaxy AI offers a different set of AI features than those found on Google Pixel devices, even though both companies use (and advertise) Gemini Nano.

Gemini's feature set differs from one device to another, while Apple Intelligence is standard.

While introducing Apple Intelligence, the Cupertino giant also made a big deal about its commitment to privacy. Cloud-based AI tasks will be performed strictly on Apple’s servers on the company’s own hardware. More importantly, human-AI interactions will not be visible to anyone besides the user, not even to Apple.

However, the platform isn’t entirely closed off either; Apple has announced a ChatGPT integration for complex Siri queries. Rumors also indicated that the company may offer responses from Gemini as an alternative to ChatGPT. This willingness to include third-party integrations marks a significant shift for Apple, which traditionally prefers to keep its ecosystem tightly controlled. However, it offers Apple a way to offload blame if the AI model responds with unsafe or misleading information.

Are Apple Intelligence and Gemini free?

Yes, both Apple Intelligence and Gemini are free to use. However, Google offers an optional paid tier called Gemini Advanced that unlocks higher-quality responses, thanks to a larger language model. While Apple doesn’t directly charge for its AI features, you can link a ChatGPT Plus account to unlock paid features like the latest GPT-4o model.

Apple Intelligence vs Google Gemini: Features compared


Credit: Rita El Khoury / Android Authority

Both Apple Intelligence and Gemini power a slew of AI features, but they differ slightly in terms of their implementation and availability. Here’s a quick rundown:

  1. Assistant: Using the power of large language models, Apple’s Siri and Google’s Gemini can both act as capable digital assistants and answer any question under the sun. Based on Apple’s demos, the new Siri has a clear advantage as it can coordinate actions across apps. For example, you can ask it to send photos from a specific location to your contact without opening either the Photos or Messages app. Gemini cannot offer this kind of functionality yet.
  2. Screen context and personalization: Apple Intelligence can access information on your screen before responding. It can also access texts, reminders, and other data across Apple apps in the background. With Gemini, you have to manually tap “Add this screen” each time to let the AI read it.
  3. Photo editing: Google uses Imagen 2 instead of Gemini for image-related tasks but it’s still surprisingly capable — Magic Editor in Google Photos can remove objects, replace the sky, and more. Samsung also uses the same model for its Photo Edit feature. Apple Intelligence adds an object cleanup tool to the Photos app but it does not offer as many AI editing options as Magic Editor.
  4. AI image generator: Apple’s Image Playground is a new app that creates AI-generated images and emoji, either based on your contacts or custom descriptions. These can be easily dropped into chat apps. Gemini can generate images too, but only via typed prompts.
  5. Mail and productivity: While you can find Gemini in many Google apps these days, the majority of features are unfortunately locked behind Gemini Advanced. Help Me Write in Gmail and Google Docs, for example, won’t appear without the subscription. Apple’s Mail app, on the other hand, will summarize your emails using an on-device model. A feature called Smart Reply will also generate a reply on your behalf after asking relevant questions based on the incoming email.
  6. Writing tools: Apple leads in this area as you can select any piece of text across the operating system and perform AI language tasks like proofreading, summarization, and paraphrasing. Galaxy AI offers similar tools via Samsung Keyboard’s Gemini Nano integration but you won’t find it on every Android phone. In fact, it’s even missing on Google’s Pixel devices and Gboard.

Apple Intelligence and Gemini: Supported devices and availability


Credit: Robert Triggs / Android Authority

Since many Apple Intelligence features utilize an on-device large language model, we knew that Apple would only bring it to relatively modern devices. However, the company has gone further than many expected and locked the entire suite to the iPhone 15 Pro series. That’s right — the regular iPhone 15 series (and earlier models) will not support Apple Intelligence, even the parts that rely entirely on the cloud.

Google, meanwhile, has done a commendable job of bringing Gemini to as many Android devices as possible. The chatbot is available on every single Android smartphone, for instance, and even features powered by the on-device Gemini Nano model are available on more devices, like the Pixel 8a.

Apple Intelligence won't be available to the vast majority of users for the foreseeable future.

Over on the computing side, Apple is more generous as it will deliver AI features to all macOS devices dating back to the M1 chip from 2020. Meanwhile, Google offers limited Gemini features in ChromeOS and you can only use it on newer Chromebook Plus machines. That said, the Gemini chatbot is accessible via a web browser on any computer.

Availability is another sore spot for Apple Intelligence. It will not launch in the UK, the European Union, or China. You will also need to set your device to the “US English” locale. While these restrictions may be relaxed at some point, Gemini is far ahead of the curve as it supports all major languages and regions.

Apple Intelligence vs Google Gemini: Privacy


Credit: Apple

Adding AI to anything is risky — the technology is prone to hallucinating and generating misleading information that could ruin a brand’s reputation. Just ask Google; the company faced backlash over its AI Overviews feature in search and has since walked it back. Another risk is data privacy — nobody wants to share sensitive data only to have it leaked or used to train future AI.

Apple has countered this problem by using a model small enough to power many AI features entirely on-device. It also employs a strategy called Private Cloud Compute, which works as follows:

When a user makes a request, Apple Intelligence analyzes whether it can be processed on device. If it needs greater computational capacity, it can draw on Private Cloud Compute, which will send only the data that is relevant to the task to be processed on Apple silicon servers. When requests are routed to Private Cloud Compute, data is not stored or made accessible to Apple, and is only used to fulfill the user’s requests.
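Read literally, Apple’s description amounts to a routing decision: process on-device when possible, otherwise send only the task-relevant fields to a server that doesn’t retain them. Here is a speculative sketch of that logic; the threshold and field names are invented for illustration, not Apple’s:

```python
# Speculative sketch of the Private Cloud Compute routing decision.
# All names and thresholds here are illustrative assumptions.

ON_DEVICE_LIMIT = 1_000  # hypothetical complexity threshold

def handle_request(request):
    if request["complexity"] <= ON_DEVICE_LIMIT:
        # Simple tasks never leave the device
        return {"where": "on-device", "data_sent": None}
    # Only the fields relevant to this task leave the device; the
    # server is assumed to process them without storing anything.
    relevant = {k: request[k] for k in request.get("needed_fields", [])}
    return {"where": "private-cloud", "data_sent": relevant}
```

The key privacy property is in the last step: the cloud path forwards a filtered subset of the request, never the whole device context.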

Gemini, meanwhile, also comes in multiple variations. The smallest language model, Gemini Nano, runs entirely on your device and is the most private option. We have a list of every Gemini Nano-powered feature on Pixel devices, and Galaxy AI has a similar feature set too.

For more complex tasks, however, you will need to use Google’s cloud-based Gemini models. And unsurprisingly, the search giant’s privacy policies aren’t as user-friendly — Gemini interactions are not only stored on Google’s servers, but they are also used to train and improve future language models. In other words, it’s the complete opposite of what Apple offers and you may want to avoid sharing sensitive information with Google’s chatbot. You can opt out of AI model training on Gemini but that comes at the cost of losing access to chat history and Extensions that allow the chatbot to access data from Gmail and other Google apps.


Overall, both AI platforms trade blows in terms of features, at least on paper. However, Apple Intelligence’s deep OS-level integration makes it far more useful day-to-day than Gemini. The only downside is that you will need the latest iPhone — and the Pro model at that. Google and Samsung may not offer the same depth, but they have done a remarkable job of bringing AI to older or less expensive devices.

Google Phone’s AI-powered scam detection looks nearly ready to go (Update: Screenshot)

  • Gemini Nano-powered scam call detection is on its way, and the Phone app is showing early evidence.
  • Phone will start differentiating between spam calls and scam calls.
  • In addition to automatically detecting scams, users may be able to manually report suspicious calls.


Update, August 1, 2024 (08:15 AM ET): We have managed to activate some of the UX of the upcoming feature.


Credit: Assemble Debug / Android Authority

As you can see, the user will be asked to choose between reporting the call as spam or as a scam.


Original article, July 25, 2024 (12:19 PM ET): Are you sold yet on the potential of AI? Smartphone features powered by AI feel like the only thing manufacturers are talking about anymore, but how many of those are actually useful tools you’re interested in, and how many seem more like fancy tech demos? Even if you’re still waiting for that killer app, we’ve got reason to be optimistic, and have heard about a few compelling projects in the works, like Google using Gemini Nano to keep you safe from scammers on voice calls. As we wait to get full details from Google on how that will arrive, we’re already seeing some early evidence of it in the Phone app.

Opening up the new Google Phone 138 beta release, we spot a number of text strings that sound related to this incoming functionality:

<string name="report_call_as_scam_action">Report call as scam</string>
<string name="report_call_as_scam_details">Unknown callers asking for your personal, financial, or device info</string>
<string name="report_call_as_spam_action">Report call as spam</string>
<string name="report_call_as_spam_details">Nuisance calls, irrelevant or unsolicited promotions, offers, etc.</string>
<string name="block_or_report_details">Information reported will only be used by Google to improve spam & scam detection.</string>

The first takeaway there is the distinction being drawn between scams and spam; right now, the app’s formal focus is only on spam (though we could see scams counting as “unwanted calls” in general). But going forward, Google Phone is preparing to be explicit about the difference.

We also notice that this seems to describe a system for manually reporting calls. The way Google talked about it back at I/O, it sounded like Gemini would be making the decision about characterizing the call as legitimate or not, so it’s interesting to consider that we may also be able to flag calls that Google misses.

SHARPIE_USER_DISMISSED_SCAM
SHARPIE_USER_CONFIRMED_SCAM
SHARPIE_SCAM_DETECTED
SHARPIE_SESSION_STARTED
SHARPIE_PRECONDITIONS_SUCCEEDED
SHARPIE_PRECONDITIONS_FAILED
SHARPIE_SETTINGS_UNKNOWN
SHARPIE_SETTINGS_AUTO_ENROLLED
SHARPIE_SETTINGS_MANUALLY_ENROLLED
SHARPIE_SETTINGS_OPTED_OUT

“Sharpie,” if you haven’t surmised just yet, is Google’s internal codename for this system. All these functions and variables present in the app code shed further light on how Phone’s AI scam detection will work and arrive. The first two you see here appear to correspond with the user interaction options Google presented back when announcing the feature in May:

Google Phone scam detection dialog box

Credit: Google

Even though all the processing for scam detection will take place on your phone, limiting the privacy implications of using it, Google’s been clear that it’s not forcing this on anyone, and the system will be opt-in when it arrives. That has us raising our eyebrows just the slightest bit at what appears to be a flag for automatically using AI scam detection — perhaps it’s intended for managed devices, or for use with Family Link?
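Purely as an illustration (this is not Google's actual code), the SHARPIE_* markers group naturally into session lifecycle, detection outcomes, and enrollment state — a hypothetical model might express that as enums:

```python
from enum import Enum

# Hypothetical grouping of the SHARPIE_* markers quoted above
# (illustrative only; not Google's actual code).
class SharpieSession(Enum):
    STARTED = "SHARPIE_SESSION_STARTED"
    PRECONDITIONS_SUCCEEDED = "SHARPIE_PRECONDITIONS_SUCCEEDED"
    PRECONDITIONS_FAILED = "SHARPIE_PRECONDITIONS_FAILED"

class SharpieDetection(Enum):
    SCAM_DETECTED = "SHARPIE_SCAM_DETECTED"
    USER_CONFIRMED_SCAM = "SHARPIE_USER_CONFIRMED_SCAM"
    USER_DISMISSED_SCAM = "SHARPIE_USER_DISMISSED_SCAM"

class SharpieEnrollment(Enum):
    UNKNOWN = "SHARPIE_SETTINGS_UNKNOWN"
    AUTO_ENROLLED = "SHARPIE_SETTINGS_AUTO_ENROLLED"
    MANUALLY_ENROLLED = "SHARPIE_SETTINGS_MANUALLY_ENROLLED"
    OPTED_OUT = "SHARPIE_SETTINGS_OPTED_OUT"

print(SharpieEnrollment.OPTED_OUT.value)
```

Reading the markers this way is what makes the AUTO_ENROLLED value stand out against Google's stated opt-in plan.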

While we still have plenty of questions about how Phone’s scam detection will arrive, and exactly how it will operate once it does, we’re hugely excited about the idea of it getting here. Scam calls are a serious problem affecting some of society’s most vulnerable members, and it’s not always easy to teach people how to recognize when they’re being taken advantage of. If we can offload some of that burden onto AI-powered systems, there’s the real potential to help protect a lot of people.

Hopefully the progress we’ve spotted indicates that Google is well on its way to getting this system running. Will it debut with the Pixel 9 in just a few more weeks? We’ll know soon!

Here’s an early look at Gemini’s Keep, Tasks, and Calendar extensions (APK teardown)

Google Gemini logo on smartphone stock photo (7)

Credit: Edgar Cervantes / Android Authority

  • We’ve managed to get an early look at Gemini’s upcoming Google Keep, Tasks, and Calendar extensions.
  • The Google Keep extension will let you create new notes and lists, add information to notes, and edit existing lists.
  • The Tasks and Calendar extensions will let you create new tasks and events and view existing tasks and events.


Gemini will soon get a host of new extensions that will enable integration with various Google services. We recently shared details about a few of the upcoming extensions, and we’ve now managed to get an early look at the Keep, Tasks, and Calendar extensions ahead of the rollout.

As you can see in the following video, the Google Keep extension allows you to ask Gemini to create new notes and lists, add information to notes, and add or remove items from lists.

The Google Tasks extension, on the other hand, lets you use Gemini to create new tasks, such as reminders. It also lets you easily view existing tasks and their due dates.

Finally, the Google Calendar extension lets you create new calendar events, view all upcoming calendar events or ones on a specific date, and edit calendar events.

Google first teased these extensions at I/O earlier this year in May, and they finally seem ready for rollout. The extensions will likely let users do more than what we’ve shown in the videos above, but we’ll have to wait until the official release to get a complete picture of the extensions’ capabilities.

Spotify could soon get its own Gemini extension, and here’s how it could work

Google Gemini logo on smartphone stock photo (2)

Credit: Edgar Cervantes / Android Authority

  • A teardown of the Google app has revealed a Spotify extension for Gemini.
  • We were able to activate the extension and play music from Spotify via the Google chatbot.


Gemini offers support for extensions, allowing you to use other Google and third-party apps within the voice assistant. The company currently offers a YouTube Music extension, but what if you use a different music streaming platform?

An Android Authority teardown of the Google app (version 15.30.27.29) has revealed that a Spotify extension for Gemini is in the works. A description of the extension (seen in the first image below) suggests that this will let you play both music and podcasts.

We were able to get the feature working on our phone, asking Gemini to play a song via Spotify. Gemini briefly shows a YouTube Music info card after processing the request, but the song indeed plays via Spotify. It’s also worth noting that the chatbot can play music via Spotify in the background instead of launching the app. Check out our video below for a better idea of how this extension works.

This is good news, as there are hundreds of millions of Spotify users out there who won’t have to sign up for YouTube Music Premium to get Gemini music integration on their Android devices.

This isn’t the only upcoming Gemini extension we’ve spotted in recent days. We discovered that several more unannounced Gemini extensions are in the works, including Google Home, the Phone app, and Utilities. This is all encouraging news if you thought Gemini could use more comprehensive integration with other apps and services.

Google pulls its terrible pro-AI “Dear Sydney” ad after backlash

The Gemini prompt box in the "Dear Sydney" ad

Credit: Google

Have you seen Google's "Dear Sydney" ad? The one where a young girl wants to write a fan letter to Olympic hurdler Sydney McLaughlin-Levrone? To which the girl's dad responds that he is "pretty good with words but this has to be just right"? And so, to be just right, he suggests that the daughter get Google's Gemini AI to write a first draft of the letter?

If you're watching the Olympics, you have undoubtedly seen it—because the ad has been everywhere. Until today. After a string of negative commentary about the ad's dystopian implications, Google has pulled the "Dear Sydney" ad from TV. In a statement to The Hollywood Reporter, the company said, "While the ad tested well before airing, given the feedback, we have decided to phase the ad out of our Olympics rotation."

The backlash was similar to that against Apple's recent ad in which an enormous hydraulic press crushed TVs, musical instruments, record players, paint cans, sculptures, and even emoji into… the newest model of the iPad. Apple apparently wanted to show just how much creative and entertainment potential the iPad held; critics read the ad as a warning image about the destruction of human creativity in a technological age. Apple apologized soon after.


Google is bringing Gemini to teens with school accounts

Google Gemini logo on smartphone stock photo (7)
Credit: Edgar Cervantes / Android Authority
  • Google says Gemini is coming to teens with educational accounts in the coming months.
  • The chatbot will be available in English to teens with school accounts in over 100 countries.

Google already offers Gemini to teenagers using their personal accounts, but teens weren’t able to use the chatbot with their educational accounts.

Now, Google has announced that Gemini is coming to teenagers via their school-issued accounts in the “coming months.” The company added that this option will be available in English in over 100 countries.

Has Google done anything unethical? Gemini changes its answer mid-sentence

Google Gemini logo on smartphone stock photo (5)
Credit: Edgar Cervantes / Android Authority
  • Gemini on mobile changes its answer mid-sentence when asked whether Google is unethical.
  • The chatbot begins to answer affirmatively before it replaces this response with a non-answer.
  • Gemini on the web gives a more comprehensive response about Google’s ethical concerns, though.

There’s no shortage of concerns about Google’s ethics, ranging from its privacy issues to YouTube’s promotion of toxic videos. However, it looks like Google might be barring Gemini on mobile from answering questions about its parent company’s ethics.

Athenil noticed that Gemini on mobile abruptly changed its answer when asked whether Google had done anything unethical. We were able to reproduce this on our own phone — check out our video below.

Anthropic introduces Claude 3.5 Sonnet, matching GPT-4o on benchmarks

The Anthropic Claude 3 logo, jazzed up by Benj Edwards

Credit: Anthropic / Benj Edwards

On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of "3.5" models that build upon Claude 3, launched in March. Claude 3.5 Sonnet can compose text, analyze data, and write code. It features a 200,000-token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that displays generated content, such as documents and code, in a dedicated window.
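For developers curious about the API mentioned above, here is a sketch that only constructs a Messages API request body; the field names, endpoint, and model identifier reflect Anthropic's public documentation at launch and should be treated as assumptions, not verified here:

```python
import json

# Sketch of an Anthropic Messages API request body. Field names and the model
# identifier reflect Anthropic's public docs at launch; treat them as
# assumptions and verify against current documentation.
payload = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Summarize this release note in one sentence."},
    ],
}

# In a real call, this JSON is POSTed to the /v1/messages endpoint with an
# x-api-key header; no network request is made here.
body = json.dumps(payload)
print(body[:40])
```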

So far, people outside of Anthropic seem impressed. "This model is really, really good," wrote independent AI researcher Simon Willison on X. "I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump)."

As we've written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).


Apple confirms that it wants Gemini supported on iPhones

  • Post-keynote at WWDC24, key Apple executives confirmed that the company wants Google’s Gemini on iPhones.
  • In fact, the company wants to support all manner of AI systems powered by large language models.
  • For now, OpenAI is Apple’s only official partner, bringing ChatGPT to the Apple Intelligence system.

At WWDC24 today, Apple took the wraps off Apple Intelligence, the company’s new umbrella name for the various generative AI tools that will soon come to iPhones, iPads, and Mac computers. To make at least some of these AI features a reality, Apple is relying on a partnership with OpenAI, creators of ChatGPT.

However, weeks before the WWDC24 keynote, we heard rumors that Apple was also in talks with Google about bringing that company’s Gemini to iPhones. That didn’t appear to pan out, but that doesn’t mean the deal is off the table.

AICore update that brings Gemini Nano to the Pixel 8 is now rolling out

Google Pixel 8 vs Google Pixel 8a laying flat

Credit: Ryan Haines / Android Authority

  • Google had confirmed that Gemini Nano would be coming to the Google Pixel 8 and Pixel 8a.
  • The Android AICore app update with the toggle responsible for enabling Gemini Nano is now rolling out for the Pixel 8, but it hasn’t been spotted on the Pixel 8a yet.
  • Enabling the setting doesn’t immediately begin downloading the Gemini Nano module just yet, so you’ll likely still have to wait for a server-side rollout.


Google’s Gemini Nano is an AI model that runs on-device to execute AI tasks. It’s the smallest model in the Gemini family, but it is very important for all the internet-free, on-device AI capabilities it brings to a smartphone. Google’s Pixel 8 Pro is the only Pixel that can use Gemini Nano for Pixel AI capabilities, but after an uproar, the company caved and agreed to bring Gemini Nano to the Pixel 8 and Pixel 8a, too. Just yesterday, we spotted the toggle for enabling on-device generative AI features within the Android AICore app, and now we can confirm that the AICore app update is rolling out to the Pixel 8 at least.

My colleague Adamya Sharma received the Android AICore app update on her Pixel 8, which includes the toggle. Curiously, the Pixel 8 Pro app was also updated, but there is no toggle in it. We couldn’t locate the Android AICore app for the Pixel 8a, though.

As you can see, the Android AICore app versions for the Pixel 8 and Pixel 8 Pro are different. The Pixel 8 Pro already has Gemini Nano features, and a toggle like this would allow users to disable AI features, which isn’t possible currently. On the Pixel 8 and the Pixel 8a, this toggle will allow users to enable AI features, as the phones lack them out of the box. The toggle isn’t enabled by default on these two phones.

Google will likely bring Gemini Nano support to the Pixel 8 and Pixel 8a in a future Feature Drop, so toggling this setting right now on the Pixel 8 doesn’t immediately begin downloading the Gemini Nano module. When the feature is rolled out, users will need to activate Developer Options and then toggle this new “Enabled on-device GenAI features” setting, which is present at Settings > System > Developer options > AICore Settings.

Did you receive the Android AICore app update on your Pixel smartphone? Did the toggle begin downloading Gemini Nano on your Pixel 8 or Pixel 8a? Let us know in the comments below!

Google Gemini ‘Ask This Video’ hands-on: The power of YouTube in a snap

Yesterday, we shared with you a preview of what you can do with Google’s new Gemini-powered “Ask This Page” feature, which was announced at I/O 2024. Today we’re getting our hands on another upcoming “Ask This…” feature, the one that works on YouTube videos.

Just like yesterday, this is an early hands-on preview with Ask This Video. The feature is not live yet, but Android Authority managed to activate it in the Google app. So, while we tried to push it a bit and see what it can do and where it might fail, there could still be room for improvement before Google launches it to the public.

Gemini Ask This Video: What it is and how it works

google gemini ask this video

Credit: Rita El Khoury / Android Authority

Ask This Video is an upcoming Gemini-powered generative AI feature that helps you ask questions about any YouTube video you’re currently watching. Instead of scrubbing and skipping through different parts of that video to find a specific bit of information, you’ll be able to query Gemini and it’ll try to find the answer in that video, without coloring outside the lines. In theory, this should be a big time-saver if you’re looking for a specific piece of information in a YouTube video and you don’t want to waste time trying to find it.

To activate Ask This Video, you just tap and hold the power button to pull up Gemini on your Android phone while watching a YouTube video. Gemini is context-aware now, so it’ll know you triggered it in YouTube and surface an “Ask this video” chip on top of the pop-up menu. See the image above for reference.

Tap that and you’ll notice that Gemini has now attached the video to the pop-up, so you can start typing questions in natural language and Google’s AI will try to find answers. It takes about 6-8 seconds for Gemini to process the request and come back with an answer.

Ask This Video understands nuance sometimes

In the example above, you can see we asked Gemini about Android Authority’s “Pixel 8a is here, but why” video, where my colleague C. Scott Brown argued that the Pixel 8a is a good phone but that its value and competitiveness are diminished by the better, frequently discounted Pixel 8. But suppose you haven’t watched that video and want to know, in a few words, what’s wrong with the phone to decide if it’s worth watching (spoiler: it is good content). You could do what we did and ask Gemini what’s wrong or bad about the Pixel 8a. I think it pretty much nailed the nuance of C. Scott’s argument.

google gemini ask this video answer 5

Credit: Rita El Khoury / Android Authority

In the next example above, I asked it for the differences between the Nothing Ear and Ear (a) in my video. It didn’t list every single difference, but focused on the biggest ones and synthesized the most important bits. In the video, I mention these features and differentiating factors in several places, but not in succession, so once again, it understood that and didn’t make any mistakes in its summary. The answer is incomplete, in my opinion, as there are other factors to consider between the two earbud models. But for an early AI version, I’ll consider this a win. (Such is the state of AI summaries now that an accurate answer counts as a win, even if it’s incomplete.)

Ask This Video can find an answer faster than you can say skip

 

I think the most impressive part of Ask This Video is how easily it can answer a pressing question, without you having to watch the whole video to unearth it. It’s not perfect yet, but in the case of my hands-on with Chipolo’s new Find My Device trackers, it correctly answered that you don’t need a separate app to use the trackers, and in Carlos Ribeiro’s fast-charging myths and truths video, it nailed his recommendation of sticking with 100W cables to keep your gear future-proof.

Ask This Video has the potential to become a genuinely useful feature when skimming videos and looking for answers. Speaking from personal experience, YouTube has become my go-to resource now for specific tutorials and how-tos (I find that the quality there is better than the random hundreds of SEO-targeted written articles), but it’s usually tough to find the exact piece of information I’m looking for in a lengthy video. I used to turn to YouTube’s video transcripts and search for specific keywords in them to quickly find my answer. Gemini should be much faster and more practical than that trick.

Google still has to fine-tune Ask This Video

As with everything AI, and specifically Google AI, things aren’t 100% perfect just yet. We didn’t try to “red team” Ask This Video, we just went for regular tech videos and questions. I’m sure when this feature goes live and people start pushing it to its limits, they could make it give bad, weird, and potentially unacceptable answers.

Going back to our tests, we ran across a couple of instances where Ask This Video wasn’t 100% spot on. In the first example above, we asked it whether the Pixel 8a was powerful and whether there was a better phone, based on my Pixel 8a tests video. The first time it answered, it only used the first half of the video where I compared the 8a against the Pixel 7a and 8, which resulted in a glowing answer in favor of the new phone.

None of that was technically wrong, but it wasn’t the full picture. Since we know that the second half of the video looks at the competition, we tried to rephrase the question to nudge it in the right direction, and that’s where it told us that the OnePlus 12R is a more powerful phone in the same price range.

The problem is that random viewers won’t have this kind of context, so they might take the first answer at face value and not realize that the video went into a different set of comparisons later and that there’s a more capable phone for the same price. This is the kind of context that I’m afraid AI summaries will miss again and again, until they get better at it. As someone who’s only recently become a YouTuber, I’ve seen so many depressing comments from people who didn’t watch my videos and jumped on a word in the title or the intro without seeing the nuance, and I fear these kinds of incomplete or wrong AI answers will create more situations like that where we’ll be blamed for the AI’s failure to summarize or synthesize something correctly.

google gemini ask this video answer 6

Credit: Rita El Khoury / Android Authority

The final example is the one where Gemini veered off-track. We asked it about the best analog options among my 10 favorite watch faces for the Pixel and Galaxy Watch and it returned three options. Only one — Nothing Fancy — is correct. Sport XR is a digital watch face and I even say that in the video when I introduce it. Material Stack is also a digital design, though I don’t mention it explicitly. Meanwhile, Gemini failed to find the option that is simply and obviously called “Analogue watch face.” It also missed “Typograph,” another watch face that I explicitly mention as having an analog design.

Let’s face it, though, this is not as dire as those terrible AI results in Google search, but if this kind of simple error can occur with watch faces, then who’s to say what can happen with more nuanced and complicated videos?

We kept our focus on tech in these early tests, but there’s a bit of everything on YouTube, from politics to social issues, cooking tutorials, sports highlights, and more. Even though Google has this ever-present “Gemini may display inaccurate info, including about people, so double-check its responses” notice at the bottom of the pop-up, we all know that most people will eventually just rely on the answer they’re getting. Errors in answers can be very detrimental, both to the viewer and the video creator, as more and more people start relying on Gemini and trusting it with their everyday queries.

Personally, I’m not a fan of this “move fast, break things, and ask for forgiveness later” approach with AI. I would have preferred if Google tested it more and waited for it to mature before throwing it out in the world. But investors and money speak, not users like you and me, so once again, this is another discussion for another day.

Pixel 8 and 8a could get this Gemini Nano toggle very soon (APK teardown)

Google Gemini logo on smartphone stock photo (1)

Credit: Edgar Cervantes / Android Authority

  • An APK teardown of Google’s AICore app shows the expected toggle for activating Gemini Nano features on the Pixel 8 and Pixel 8a.
  • Google said Nano would come to these phones through Developer Options, so this suggests that the launch is imminent.
  • However, it’s also possible this would allow users to opt out of Nano on the Pixel 8 Pro, too.


There are many versions of Google’s Gemini. Only one of them, Gemini Nano, actually runs on the hardware built into smartphones. So far, Nano support only exists on a handful of phones. Notably, this includes just one Pixel: the Google Pixel 8 Pro. When Google confirmed that Nano support wouldn’t come to the Pixel 8, there was a slight uproar, so much so that Google backtracked and agreed to bring Nano support to the Pixel 8 and, even better, the Pixel 8a, too.

The caveat, though, is that Google is going to force users to manually enable Nano support on the Pixel 8 and 8a through a toggle in developer options, while Pixel 8 Pro users already have the features automatically enabled. This means that the vast majority of Pixel 8/8a users won’t use Gemini Nano features because so few of them will know it’s even an option. Now, thanks to an APK teardown of the recent AICore app from Google, we can see the supposed toggle for this feature, suggesting an imminent launch. Interestingly, it might also mean more control of the feature for Pixel 8 Pro users, too.

First, let’s show you what we found. In the screenshot below, you can see the two toggles we expect to appear in Settings > Developer Options > AICore Settings on the Pixel 8 and Pixel 8a. The first toggle gives permission for AICore to use as many resources as possible (which you will almost certainly want to leave activated), and the second actually turns on Nano.

Gemini AICore Toggle Leak

Credit: C. Scott Brown / Android Authority

We can’t say anything for certain until Google actually announces this, but we assume both of these toggles will be “off” by default. That’s how Google described it, so that’s what we’re going with for now.

In other words, this is the order things should go:

  • Google announces a Feature Drop that brings Nano support to the Pixel 8 and 8a
  • Pixel 8 and 8a users will need to activate Developer Options
  • In Developer Options, you’ll need to activate the second toggle in the screenshot above and, optionally, the first one

Theoretically, once you do those steps, you should be able to use Gemini Nano features on your Pixel 8 and Pixel 8a.

The very fact that the AICore app has this means we should expect Google to announce Nano support for the Pixel 8/8a very soon, possibly in just weeks or even days.

What about the Pixel 8 Pro?

One of the interesting side-effects of this toggle’s upcoming existence on the Pixel 8 and Pixel 8a is that it might also come to the Pixel 8 Pro. This would, in theory, allow Pixel 8 Pro users to disable Gemini Nano — something that’s currently not possible. As mentioned earlier, Nano support is enabled by default on the Pixel 8 Pro already, and without a toggle like this, there’s no way to turn it off.

Obviously, most people wouldn’t feel the need to disable Nano support, but it is possible that this toggle could give those folks the option. Just like with Pixel 8 and 8a users turning the feature on, Pixel 8 Pro users could follow the same steps to turn it off.

Samsung already allows users to disable/enable specific AI features through its Galaxy AI interface, which is built right into Android settings. Unfortunately, this toggle buried in Developer Options on Pixels wouldn’t be nearly as convenient, but at least it would give users more control, which is almost always a good thing.

Here are the commands you can use with Gemini’s upcoming YouTube Music extension

YouTube Music logo on smartphone, next to headphones and Nest Mini (4)
Credit: Edgar Cervantes / Android Authority
  • Google has published official documentation of Gemini’s upcoming YouTube Music extension.
  • The documentation gives examples of prompts and prompt formats that you can use to create more prompts.
  • The extension is not currently live within Gemini but could launch soon.

We’re expecting some big AI announcements in the coming days, as both OpenAI and Google have events scheduled this week. While ChatGPT could become more of a search engine, Google could be looking to integrate more of Gemini into its ecosystem. Google has been testing a Gemini Extension for YouTube Music, which we have previously detailed with a demo video. Now, we have a better idea of its functionality with a list of commands that Gemini will accept for YouTube Music.

Android Authority contributor Assemble Debug spotted that Google has added official documentation on connecting YouTube Music to Gemini apps. The documentation states that YouTube Music isn’t available in Gemini in Google Messages and that the extension works with English prompts only for now.

Here’s Gemini’s upcoming YouTube Music extension in action

YouTube Music logo on smartphone, next to headphones and Nest Mini (2)
Credit: Edgar Cervantes / Android Authority
  • We’ve managed to activate YouTube Music as an Extension in the Gemini Android app. This demo shows how the extension will work within Gemini.
  • The YouTube Music Gemini Extension allows Gemini to access your YouTube Music information to provide better search results.
  • The extension is not currently live within Gemini but could launch soon.

The race for AI has been underway for years, but ChatGPT pushed the industry from a marathon to a sprint. ChatGPT’s arrival caught Google somewhat off-guard, and the company has since doubled down on its AI efforts under the Gemini rebrand. One way Gemini could reach critical mass is to attract more users who are already present within the Google ecosystem, and Gemini Extensions could achieve this with tighter ecosystem integration. Google has been testing a Gemini Extension for YouTube Music that we have detailed previously, and we now have a better demo video to share with you on how it will work when it rolls out.

Android Authority contributor Assemble Debug managed to activate a new Gemini Extension for YouTube Music in the Google app v15.17.28.29.arm64. With some more work, we got the extension working well enough for a demo.


You don’t even need Gemini open to use Gemini in Chrome for the web

Google Gemini logo on smartphone stock photo (2)
Credit: Edgar Cervantes / Android Authority
  • You can now launch Gemini in Chrome through its address bar. Type @gemini followed by your prompt to quickly get your response.
  • Gemini’s mobile app is now also rolling out to more languages and countries.
  • Extensions are also rolling out to all the languages and countries that Gemini supports.

Gemini is Google’s next big AI product, taking over the reins from Google Assistant. There’s still a long way to go for Gemini to mature, but Google is quickly marching in that direction with iterative updates. As part of today’s update announcements, Google is rolling out Gemini features to more languages and countries and is making it even easier to access Gemini in Google Chrome.

Launch Gemini in Chrome with @gemini

Chrome’s address bar works as a conventional address bar but also doubles as a quick search box. Now, it will play another role as a quick launcher for Gemini. You can now type @gemini followed by your prompt in the Chrome address bar to automatically load Gemini’s web app with your response ready to go.

The Gemini app is getting a speed boost with ‘real-time responses’

Google Gemini logo on smartphone stock photo (7)

Credit: Edgar Cervantes / Android Authority
  • Google’s Gemini app for Android could soon gain a ‘real-time responses’ option.
  • This will allow you to read a response as it’s being generated instead of waiting for the entire response to be generated first.
  • This would bring it in line with the web version of Gemini.

The Gemini assistant didn’t exactly enjoy a smooth launch, but Google has been working to improve the chatbot since then. The latest addition to the service could be faster responses for the Android app, according to a new report.

Gemini assistant could finally support music services like Spotify and others

Google Gemini logo on smartphone stock photo (6)

Credit: Edgar Cervantes / Android Authority

  • Gemini assistant could get a feature that will allow it to integrate with third-party music streaming services.
  • Users will be able to pick their preferred music service to play music.
  • The feature was found within Gemini Settings.


Gemini assistant has been a great substitute for Google Assistant on many fronts. However, the LLM still has some limitations, like being unable to identify songs or work with third-party music services like Spotify. But Google could soon give Gemini a feature that would drop that limitation.

Google appears to be preparing to give its chatbot a new music-related feature. According to a tip from AssembleDebug provided to PiunikaWeb, Gemini may soon get a “Music” option that allows the user to “select preferred services used to play music.” This feature was discovered within the Gemini Settings page.

In the images below, you can see the feature appears as the second to last option in the list. When you tap on Music, you’re taken to a page where you can “Choose your default media provider.” This page appears empty at the moment with no services listed.

The feature hints that users will soon be able to pick a service that Gemini can integrate with. Once integrated, this would open up the ability to get Gemini to play music with voice commands.

It’s unknown if Google will eventually expand this function to other types of media, like audiobooks or podcasts. Furthermore, there is no information on when the company plans to roll this feature out. But if and when it does, it should be a win for music enthusiasts.

Pixel 8 left behind: Google reveals Gemini Nano won’t come to base model

google pixel 8 rear hazel 5
Credit: Rita El Khoury / Android Authority
  • It was revealed on #TheAndroidShow that Gemini Nano is not coming to Pixel 8.
  • It appears hardware limitations are to blame.
  • Gemini Nano will come to more high-end devices in the near future.

The Pixel 8 Pro has Gemini Nano, leaving Pixel 8 owners wondering when the on-device AI will come to the base model. It looks like the answer is: it won’t.

Today, Google pushed out the latest episode of #TheAndroidShow, where it discussed a number of topics, including MWC, Android 15, and Gemini Nano. When the show got to the Q&A portion, however, there was an interesting revelation.

Gemini app finally gets a productivity feature it was sorely missing

A user views the Gemini app icon on their Samsung S22 Ultra.
Credit: Kaitlyn Cimino / Android Authority
  • Gemini is now letting users set reminders through Google Assistant integration and Google Tasks.
  • The functionality appears to be slowly rolling out in the US.

There are a few things holding Gemini back from being a true replacement for Google Assistant on Android. This includes absent features like routines, media service provider support, and more. But it looks like Gemini is now getting at least one of the functions it was sorely missing.

According to 9to5Google, the Gemini app on Android now has the ability to set reminders with the help of Google Assistant integration and Google Tasks. This means the app can now accept commands like “remind me to turn off lights tonight” or “remind me to grab the mail tomorrow morning.” Previously, attempting to set a reminder would produce an error message or a false acknowledgment.

Hands on with Google’s Gemini app: You can’t have your cake and eat it too

A user views the Gemini app icon on their Samsung S22 Ultra.
Credit: Kaitlyn Cimino / Android Authority

Formerly known as Google Bard, Google’s AI chatbot boasts a new name and a host of new capabilities. More significantly, a new Gemini app can now replace Google Assistant on your Android phone, assuming your phone is set to US English. I won’t spend too much time diving into the confusing nuances of Google’s AI strategy (or seeming lack thereof). Instead, I’ll leave that scrambled chaos to more eloquent voices and simply go hands-on with the Gemini app.

According to Google

The Gemini app responds to a prompt asking what the differences are between Google Assistant and the Gemini app.

Gemini is rolling out to Google One AI Premium subscribers, here’s what you can use it for

Gemini Google Logo
Credit: Google
  • Gemini is rolling out to all users who subscribe to the Google One AI Premium plan, allowing them to use AI across Gmail, Docs, Slides, Sheets, and Meet apps.
  • Google is also launching a new “Chat with Gemini” feature for Gemini for Workspace.
  • Gemini for Google Workspace is also getting Business and Enterprise tiers, expanding access.

Google’s massive rebranding campaign turned all its AI products into Google Gemini. The rebranding is currently in progress, and starting today, Duet AI for Google Workspace will transition into Gemini for Google Workspace. As part of the move, Google has released a standalone “chat with Gemini” experience for Workspace customers while opening up Gemini access to all consumers through the Google One AI Premium subscription.

Gemini through Google One AI Premium across Google apps

Starting today, users who subscribe to the new $19.99-per-month Google One AI Premium plan can access Gemini across their personal Gmail, Docs, Slides, Sheets, and Meet apps. The rollout covers English-language users in more than 150 countries.

The Google Assistant Android app is now Gemini by default

Google's Gemini app open with a greeting from the new AI assistant.

Credit: Kaitlyn Cimino / Android Authority

  • Google is now turning Assistant into Gemini by default.
  • When you download the Assistant Android app, you get Gemini instead, complete with a different app icon.
  • You’ll have to manually switch back to Assistant to continue using the digital assistant.


Google introduced the Gemini Android app recently. It gives Android users the option to switch from Google Assistant to Gemini and make it the default AI helper on their phones. However, it looks like Google is getting ready to phase out Assistant and replace it entirely with Gemini.

If you don’t already have the Google Assistant app on Android and download it now from the Play Store, you get Gemini AI by default. Even the app icon says Gemini instead of Google Assistant. What’s strange is that if you also downloaded the standalone Gemini Android app, you’ll now see two instances of Gemini on your apps list.

Gemini AI replacing Google Assistant by default

Credit: Adamya Sharma / Android Authority

Google launches “Gemini Business” AI, adds $20 to the $6 Workspace bill

Credit: Google

Google went ahead with plans to launch Gemini for Workspace today. The big news is the pricing information, and you can see the Workspace pricing page is new, with every plan offering a "Gemini add-on." Google's old AI-for-Business plan, "Duet AI for Google Workspace," is dead, though it never really launched anyway.

Google has a blog post explaining the changes. Google Workspace starts at $6 per user per month for the "Starter" package, and the AI "Add-on," as Google is calling it, is an extra $20 monthly cost per user (all of these prices require an annual commitment). That is a massive price increase over the normal Workspace bill, but AI processing is expensive. Google says this business package will get you "Help me write in Docs and Gmail, Enhanced Smart Fill in Sheets and image generation in Slides." It also includes the "1.0 Ultra" model for the Gemini chatbot—there's a full feature list here. This $20 plan is subject to a usage limit for Gemini AI features of "1,000 times per month."
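To put that price jump in perspective, here is the arithmetic under the figures quoted above (Starter plan plus the AI add-on, billed per user with an annual commitment):

```python
STARTER_MONTHLY = 6.00        # Workspace "Starter", per user per month
GEMINI_ADDON_MONTHLY = 20.00  # Gemini "Add-on", per user per month

monthly_per_user = STARTER_MONTHLY + GEMINI_ADDON_MONTHLY  # $26 -- over 4x the base bill
annual_per_user = monthly_per_user * 12                    # $312 per user per year
print(f"${monthly_per_user:.2f}/user/month, ${annual_per_user:.2f}/user/year")
```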

Gemini for Google Workspace represents a total rebrand of the AI business product and some amount of consistency across Google's hard-to-follow, constantly changing AI branding. Duet AI never really launched to the general public. The product, announced in August, only ever had a "Try" link that led to a survey, and after filling it out, Google would presumably contact some businesses and allow them to pay for Duet AI. Gemini Business now has a checkout page, and any Workspace business customer can buy the product today with just a few clicks.

Google goes “open AI” with Gemma, a free, open-weights chatbot family

The Google Gemma logo

Credit: Google

On Wednesday, Google announced a new family of AI language models called Gemma, which are free, open-weights models built on technology similar to the more powerful but closed Gemini models. Unlike Gemini, Gemma models can run locally on a desktop or laptop computer. It's Google's first significant open large language model (LLM) release since OpenAI's ChatGPT started a frenzy for AI chatbots in 2022.

Gemma models come in two sizes: Gemma 2B (2 billion parameters) and Gemma 7B (7 billion parameters), each available in pre-trained and instruction-tuned variants. In AI, parameters are values in a neural network that determine AI model behavior, and weights are a subset of these parameters stored in a file.
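To make “parameters” concrete: a single dense layer has one weight per input-output pair, plus one bias per output. A toy count (nowhere near Gemma’s billions; the layer sizes here are arbitrary):

```python
def dense_param_count(n_in, n_out, bias=True):
    # weight matrix (n_in x n_out) plus an optional bias vector (n_out)
    return n_in * n_out + (n_out if bias else 0)

# Two-layer toy network: 256 -> 512 -> 10
hidden_params = dense_param_count(256, 512)   # 256*512 + 512 = 131,584
output_params = dense_param_count(512, 10)    # 512*10 + 10 = 5,130
total_params = hidden_params + output_params  # 136,714
```

Gemma 2B and 7B are simply the same idea scaled up to roughly 2 and 7 billion such values, which is what gets stored in the released weight files.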

Developed by Google DeepMind and other Google AI teams, Gemma pulls from techniques learned during the development of Gemini, which is the family name for Google's most capable (public-facing) commercial LLMs, including the ones that power its Gemini AI assistant. Google says the name comes from the Latin gemma, which means "precious stone."

Following Gemini, meet Google’s latest AI models: Gemma

Generative AI development is incredibly fast-paced. Just a few months after Google first announced its next-generation Gemini model, the company went public with the next version of its generative model, Gemini 1.5. And just as all eyes are focused on Gemini and its GPT rivals from OpenAI, Google has announced yet another AI advancement. The company has revealed Gemma, its new family of open models developed by DeepMind and other divisions.

Google plans “Gemini Business” AI for Workspace users

The Google Gemini logo.

Credit: Google

One of Google's most lucrative businesses consists of packaging its free consumer apps with a few custom features and extra security and then selling them to companies. That's usually called "Google Workspace," and today it offers email, calendar, docs, storage, and video chat. Soon, it sounds like Google is gearing up to offer an AI chatbot for businesses. Google's latest chatbot is called "Gemini" (it used to be "Bard"), and the latest early patch notes spotted by Dylan Roussei of 9to5Google and TestingCatalog.eth show descriptions for new "Gemini Business" and "Gemini Enterprise" products.

The patch notes say that Workspace customers will get "enterprise-grade data protections" and Gemini settings in the Google Workspace Admin console and that Workspace users can "use Gemini confidently at work" while "trusting that your conversations aren't used to train Gemini models."

These "early patch notes" for Bard/Gemini have been a thing for a while now. Apparently, some people have ways of making the site spit out early patch notes, and in this case, they were independently confirmed by two different people. I'm not sure the date (scheduled for February 21) is trustworthy, though.

Read 3 remaining paragraphs | Comments
