The headlining event of Google I/O 2025, the live keynote, is officially in the rearview mirror. However, if you've followed I/O before, you may know there's a lot more happening behind the scenes than what you'll find live-streamed on YouTube. There are demos, hands-on experiences, Q&A sessions, and more happening at Shoreline Amphitheatre near Google's Mountain View headquarters.
We've recapped the Google I/O 2025 keynote and given you hands-on scoops about Android XR glasses, Android Auto, and Project Moohan. For those interested in the nitty-gritty demos and experiences happening at I/O, here are five of my favorite things I saw at the annual developer conference today.
Controlling robots with your voice using Gemini
Google briefly mentioned during its main keynote that its long-term goal for Gemini is to make it a "universal AI assistant," and robotics has to be part of that. The company says that its Gemini Robotics division "teaches robots to grasp, follow instructions and adjust on the fly." I got to test out Gemini Robotics myself, using voice commands to direct two robotic arms and move objects hands-free.
The demo uses a Gemini model, a camera, and two robotic arms to move things around. The multimodal capabilities, like a live camera feed and microphone input, make it easy to control Gemini robots with simple instructions. In one instance, I asked the robot to move the yellow brick, and the arm did exactly that.
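Google hasn't published the control stack behind this demo, but the general pattern it describes (pair a camera frame with a transcribed voice command and ask Gemini for a structured action) is easy to sketch. Below is a minimal, purely illustrative example using the google-genai Python SDK; the model name and the JSON action format are my assumptions, not Google's robotics API.

```python
# Illustrative only: Gemini Robotics' real control stack isn't public.
# This shows the general pattern of sending a camera frame plus a spoken
# instruction to Gemini and asking for a structured action back.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("workspace_frame.jpg", "rb") as f:  # latest camera frame
    frame = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash",  # assumed model; the demo's model is unnamed
    contents=[
        types.Part.from_bytes(data=frame, mime_type="image/jpeg"),
        # The transcribed microphone input from the demo
        'Move the yellow brick into the bin. '
        'Answer as JSON with keys "object" and "target".',
    ],
    config=types.GenerateContentConfig(response_mime_type="application/json"),
)
print(response.text)  # e.g. {"object": "yellow brick", "target": "bin"}
```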
It felt responsive, although there were some limitations. At one point, I tried to tell Gemini to move the yellow piece back where it was before, and quickly discovered that this version of the AI model doesn't have a memory. But considering Gemini Robotics is still an experiment, that's not exactly surprising.
I wish Google would've focused a bit more on these applications during the keynote. Gemini Robotics is exactly the kind of AI we should want. There's no need for AI to replace human creativity, like art or music, but there's an abundance of potential for Gemini Robotics to eliminate the mundane work in our lives.
Trying on clothes using Shop with AI Mode
As someone who refuses to try on clothes in dressing rooms, and hates returning clothes from online stores that don't fit as expected just as much, I was skeptical but excited about Google's announcement of Shop with AI Mode. It uses a custom image generation model that understands "how different materials fold and stretch based on different bodies."
In other words, it should give you an accurate representation of how clothes will look on you, rather than just superimposing an outfit with augmented reality (AR). I'm a glasses-wearer who frequently tries on glasses virtually using AR, hopeful that it will give me an idea of how they'll look on my face, only to be disappointed by the result.
I'm happy to report that Shop with AI Mode's virtual try-on experience is nothing like that. It takes a full-length photo of you and uses generative AI to add an outfit in a way that looks shockingly lifelike. In the gallery below, you can see each part of the process: the finished result, the marketing photo for the outfit, and the original picture of me used for the edit.
Is it going to be perfect? Probably not. With that in mind, this virtual try-on tool is easily the best I've ever used. I'd feel much more confident buying something online after trying this tool, especially if it's an outfit I wouldn't normally wear.
Creating an Android Bot of myself using Google AI
Lots of demos at Google I/O are really fun, simple activities with a lot of technical work happening in the background. There's no better example of that than Androidify, a tool that turns a photo of yourself into an Android Bot. To get the result you see below, a complex Android app flow used AI and image processing. It's a glimpse of how an app developer might use Google AI in their own apps to offer new features and tools.
Androidify starts with an image of a person, ideally a full-length photo. Then, it analyzes the image and generates a text description of it using the Firebase AI Logic SDK. From there, that description is sent to a custom Imagen model optimized specifically for creating Android Bots. Finally, the image is generated.
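The real app is built on Android with the Firebase AI Logic SDK and a custom Imagen model, neither of which I can reproduce here, but the two-stage flow is easy to approximate. Here's a hedged sketch using the public google-genai Python SDK, with stand-in model names:

```python
# Approximation of Androidify's two-stage flow. The real app is Kotlin on
# the Firebase AI Logic SDK with a custom Imagen model; the model names
# below are public stand-ins, not the ones Androidify uses.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Stage 1: a multimodal Gemini call turns the photo into a text description.
with open("me.jpg", "rb") as f:
    photo = f.read()

description = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        types.Part.from_bytes(data=photo, mime_type="image/jpeg"),
        "Describe this person's outfit, hair, and accessories in one sentence.",
    ],
).text

# Stage 2: that description becomes the prompt for an Imagen model.
result = client.models.generate_images(
    model="imagen-3.0-generate-002",  # Androidify uses a custom fine-tune
    prompt=f"A friendly green Android Bot mascot with: {description}",
    config=types.GenerateImagesConfig(number_of_images=1),
)
with open("android_bot.png", "wb") as f:
    f.write(result.generated_images[0].image.image_bytes)
```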
That's a lot of AI processing to get from a real-life photo to a custom Android Bot. It's a neat preview of how developers can use tools like Imagen to offer new features, and the good news is that Androidify is open-source. You can learn more about everything that goes into it here.
Making music with Lyria 2
Music isn't my favorite medium to incorporate AI into, but alas, the Lyria 2 demo station at Google I/O was pretty neat. For those unfamiliar, Lyria RealTime "leverages generative AI to produce a continuous stream of music controlled by user actions." The idea is that developers can incorporate Lyria through an API to add soundtracks to their apps.
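For the curious, here's roughly what driving Lyria RealTime from code looks like, based on the experimental endpoint in the google-genai Python SDK. The API is in preview, so treat every name below as an assumption that may change:

```python
# Sketch of steering Lyria RealTime from code, mirroring the giant genre
# dials at the demo station. Uses the experimental endpoint in the
# google-genai SDK; expect these names to shift while the API is in preview.
import asyncio
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY",
                      http_options={"api_version": "v1alpha"})

async def main():
    async with client.aio.live.music.connect(
        model="models/lyria-realtime-exp"
    ) as session:
        # Blend genres by weight, like spinning the chair-sized knobs.
        await session.set_weighted_prompts(prompts=[
            types.WeightedPrompt(text="minimal techno", weight=0.7),
            types.WeightedPrompt(text="jazz piano", weight=0.3),
        ])
        await session.set_music_generation_config(
            config=types.LiveMusicGenerationConfig(bpm=120)
        )
        await session.play()
        # Audio arrives as a continuous stream of PCM chunks.
        async for message in session.receive():
            pcm = message.server_content.audio_chunks[0].data
            # ...hand `pcm` to an audio output device here

asyncio.run(main())
```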
At the demo station, I tried a real-life representation of the Lyria API in action. There were three music control knobs, only they were as big as chairs. You could sit down and spin a dial to adjust the percentage of influence each genre had on the sound being created. As you changed the genres and their prominence, the audio playing changed in real time.
The cool part about Lyria RealTime is that, as the name suggests, there's no delay. Users can change the music generation instantly, giving people who aren't musicians more control over sound than ever before.
Producing custom videos with Flow and Veo
Finally, I used Flow, an AI filmmaking tool, to create custom video clips using Veo video-generation models. Compared to basic video generators, Flow is designed to let creators keep consistent, seamless themes and styles across clips. After making a clip, you can reuse the video's characteristics as "ingredients," using them as prompting material to keep generating.
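Flow itself is a consumer app, but the Veo models underneath it are also reachable through the Gemini API. As a rough sketch of that underlying access (model name and config values are assumptions drawn from the preview docs):

```python
# Sketch of generating a clip with a Veo model through the google-genai
# Python SDK. Flow adds editing and "ingredients" on top; this only shows
# raw model access, with assumed model and config names.
import time
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

operation = client.models.generate_videos(
    model="veo-2.0-generate-001",
    prompt="A baseball player hitting a home run, comic-book style",
    config=types.GenerateVideosConfig(
        aspect_ratio="16:9",
        number_of_videos=1,
    ),
)

# Video generation is a long-running job, so poll until it finishes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("home_run.mp4")
```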
I gave Veo 2 (I couldn't try Veo 3 because it takes longer to generate) a challenging prompt: "generate a video of a Mets player hitting a home run in comic style." In some ways, it missed the mark: one of my videos had a player with two heads, and none of them actually showed a home run being hit. But setting Veo's struggles aside, it was clear that Flow is a useful tool.
The ability to edit, splice, and add to AI-generated videos is nothing short of a breakthrough for Google. The very nature of AI generation is that every creation is unique, and that's a bad thing if you're a storyteller using multiple clips to create a cohesive work. With Flow, Google seems to have solved that problem.
If you found the AI talk during the main keynote boring, I don't blame you. The word Gemini was spoken 95 times, and AI was uttered barely fewer at 92. The cool thing about AI isn't what it can do, but how it can change the way you complete tasks and interact with your devices. So far, the demo experiences at Google I/O 2025 did a solid job of showing attendees at the event the how.