Android 17’s Gemini Intelligence is finally making mobile AI useful
Modern smartphone launches are frequently less about hardware and more about what the new phones can do with AI – the problem is that most of them are, well, gimmicky.
When was the last time you used Apple’s Image Playground? Or added a doodle to your photo using Samsung’s Galaxy AI? Have you ever actually used the Pixel’s Camera Coach feature when snapping a photo? What about Honor’s Magic Portal? Nope, I didn’t think so.
Don’t misunderstand me; these are all fantastic showcases of what AI can do. The problem is that they’re not really solving any problems. They’re something you can show off to your mates once or twice, but will you actually use them in day-to-day life? For the vast majority of the ‘killer’ AI features, I’d argue not.
But, the new Gemini Intelligence features headed to phones in Android 17? Well, that could be a different matter entirely.
Fine, they’re not all bad
Okay, yes, I am being a bit harsh there because there are a few AI-based features that I tend to gravitate towards on my phone.
The main one is, of course, Gemini itself; the virtual assistant is pretty handy for quick random thought queries, but more so for me, it’s great at extracting information like briefings and launch dates from emails and adding them to my Google Calendar – a task that used to take quite a while during busy periods.
And I’ll throw Circle to Search into the mix there because, well, it works perfectly most of the time. Circle something on screen, and you’ll be able to find that thing on Google. It’s great for finding niche products, whether that’s an obscure bit of tech or an outfit you like, without all the usual legwork of Google searches.
But those features are cloud-based and can be used on pretty much any phone at any price point. What I’m talking about are the AI features that are often locked to the latest flagship hardware.
And when it comes to those manufacturer-specific features, that list shrinks considerably. I’ve used pretty much every brand’s suite of AI tools, including those from Apple, Google, Samsung, Oppo, Honor and Xiaomi, and very few actually made their way into my daily life.
I must say that the Samsung Galaxy S26 Ultra’s image editing capabilities are far ahead of the competition, not only in removing objects without making the picture look ‘odd’, but also in how you can remix the entire look of a photo in One UI 8.5. It makes taking clean shots, especially at busy events, a little easier – but even then, it’s a feature I’ve only used a handful of times.
It’s safe to say that I’ve yet to be wowed by mobile AI so far, but the upgraded Gemini Intelligence coming in Android 17 may finally change my mind.
Android 17’s Gemini Intelligence is a big deal
Revealed at Google’s mobile-focused Android Show: I/O Edition earlier this week, Gemini Intelligence is essentially the big feature of Android 17 coming later this year. And while most smartphone manufacturers want you to believe that their latest AI features will totally change the way you use your phone, where Google is concerned, I’m inclined to actually believe it.
Stay with me here because, while I haven’t been drinking Google’s Kool-Aid, there’s a lot to like about Gemini Intelligence – on paper, at least.
That starts with a major upgrade to how Gemini itself operates, enabling the virtual assistant to handle multi-app autonomous tasks. According to Google, you could, say, take a photo of a concert flyer and ask Gemini to find local hotels on Expedia for that date. It’ll then extract all the relevant information from the photo, including location and date, head to Expedia, find the best room for the price, and fill in all your details – all you need to do is tap confirm.
Now this is most certainly a very polished example that shows off all the new smarts, but it could also mean much simpler (yet still handy) tasks like getting Gemini to search for cheap flights for a trip on Skyscanner, scouring TikTok for the latest trends or just about anything else you can think of.
In theory, anyway. It’s not yet clear whether apps will need to add support or whether Gemini is indeed smart enough to be able to understand what’s on screen, regardless of what it’s looking at – but if it’s as it sounds, it could be a huge upgrade.
Tied to that is Gemini’s ability to autonomously browse the web in Chrome. Accessible from within the app itself, a new Gemini tab will allow you to ask the assistant to, say, search the web for information related to a specific niche, or to find that rare Pokémon card you’ve been on the hunt for.
I’m also curious to see how the new voice-to-text feature, Rambler, works. The idea is solid: it uses Gemini to analyse what you’re saying, cut out all the ums and ahs, and even rewrite everything to sound a little more polished. You can even change your mind mid-sentence, and Rambler will correct everything rather than spitting out a rambling monologue like the current voice-to-text on Android does.
I love the idea of using voice dictation more often, especially for replying to emails on the go, but the amount of editing I’m left with usually means it’s faster to just type it out. The hope is that Rambler could actually change that for me.
The most out-there addition is Create Your Widget, and it does what it says on the tin: it uses Gemini to create entirely custom widgets tailored to your needs. That could be anything from basic custom timers to hyper-specific widgets, like one that pulls upcoming events in your area from the web.
There are going to be limitations, of course – will it be able to pull in data from third-party apps on your phone? I doubt it. But it could save you from diving into apps or Chrome to find something you need to know often, and that alone is more helpful than most of the AI tools we currently see on phones.
Of course, this all depends on real-world performance – something nobody outside of Google has been able to test in person yet – so I’m keeping my expectations in check for now. But I certainly can’t wait to give the tools a go once they’re released.
But what about availability?
The big question right now is, which devices will get the full Gemini Intelligence suite? And the answer is about as clear as mud.
Google claims that Gemini Intelligence will start rolling out to Pixel and Samsung Galaxy phones this summer, likely in time for the Android 17 upgrade, but which phones will it actually include? Will it be limited to the Pixel 11 collection and Samsung’s upcoming foldables? Both are rumoured for launch this summer, after all.
Or will older Pixels and Samsung phones get the upgrade once they get the Android 17 update?
It’s also possible that it won’t be directly tied to the Android 17 rollout; Google has already confirmed that the auto-browse tech is coming to Chrome in June, well ahead of Android 17’s release.
This suggests that it’d instead be tied to a Chrome app update rather than something OS-level. Could the same be true of features like Rambler? Could that appear in a Gboard app update instead? And if so, will that mean the millions of Android devices with the Chrome and Gboard apps automatically get the features, even if they’re not Pixel- or Samsung-branded?
There are a lot of questions right now and not many answers, but my hope is that the upgraded suite of tools will become as ubiquitous as Gemini is on Android phones. Because, honestly, it could be the biggest mobile AI-focused upgrade yet.