Google’s annual I/O developer conference for 2025 kicked off at the Shoreline Amphitheatre in Mountain View this week, showcasing major updates across the company’s product line, from Android, AI, and hardware to new platforms aimed at developers.
One of the key announcements this year is Google AI Ultra, the company’s premium subscription aimed squarely at power users in the U.S. The plan costs $249.99 per month and gives subscribers top-tier access to Google’s suite of AI-powered applications, including the Veo 3 video generator, the Flow video editing app, and the forthcoming Gemini 2.5 Pro Deep Think mode. The plan also offers higher usage limits in NotebookLM and Google’s Whisk image-remixing app, access to a new Gemini integration in Chrome, and agentic tools built on Project Mariner. Additional perks include a YouTube Premium membership and 30TB of storage shared across Google Drive, Photos, and Gmail.
Gemini 2.5 Pro introduces a new feature called Deep Think. Google explained that in this mode, its AI model considers multiple possible responses or outcomes before delivering an answer, leading to increased accuracy on complex queries. Deep Think is currently being tested by select users via the Gemini API, with a broader launch pending rigorous safety evaluations.
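For developers curious about the plumbing, here is a minimal sketch of what a Gemini API request looks like with Google’s google-genai Python SDK. Deep Think itself was limited to select testers at announcement time, so the snippet only illustrates the general request shape; the exact model identifier and the thinking-budget value shown are assumptions, not a documented Deep Think switch.

```python
# pip install google-genai
# Minimal sketch of a Gemini API call; Deep Think access was gated to select
# testers at announcement, so no Deep Think flag is shown here.
from google import genai
from google.genai import types

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-pro",  # assumed identifier; check the current model list
    contents="Plan a 3-course dinner that reuses every ingredient at least twice.",
    config=types.GenerateContentConfig(
        # The 2.5 models expose a reasoning ("thinking") budget; the value is illustrative.
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)
print(response.text)
```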
On the generative AI front, Google unveiled the new Veo 3 model, capable of producing highly realistic video complete with auto-generated sound effects, ambient noise, and even dialogue to match the visuals. Currently available exclusively to Google AI Ultra subscribers, Veo 3 represents a notable improvement in quality and versatility over its predecessor.
Similarly, Imagen 4, the latest iteration of Google’s AI image generator, brings significantly faster rendering and higher-quality output. According to Google, Imagen 4 can accurately depict intricate details and textures, such as fur, water droplets, and fine fabrics, and creates images at up to 2K resolution in both photorealistic and abstract styles. A faster variant, which Google says will be up to 10 times quicker than Imagen 3, is expected to be released soon.
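For context on how developers typically reach Google’s image models, the sketch below shows an image-generation request with the same google-genai Python SDK. The call shape mirrors the SDK’s existing Imagen interface; the Imagen 4 model identifier used here is a placeholder assumption, since the announcement did not specify API naming.

```python
# pip install google-genai pillow
# Sketch of an Imagen-style request; the model ID below is a placeholder.
from io import BytesIO

from google import genai
from google.genai import types
from PIL import Image

client = genai.Client()  # reads GEMINI_API_KEY from the environment

result = client.models.generate_images(
    model="imagen-4.0-generate-001",  # hypothetical ID; consult the model list
    prompt="Macro shot of dew drops on fine woven fabric, photorealistic, 2K",
    config=types.GenerateImagesConfig(number_of_images=1),
)

for generated in result.generated_images:
    Image.open(BytesIO(generated.image.image_bytes)).show()
```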
Additionally, Google introduced Stitch, an AI-powered tool designed to simplify the creation of user interfaces for web and mobile apps. Stitch lets developers quickly translate basic prompts or visual references into working interface designs, complete with ready-to-use HTML and CSS.
Google’s Project Mariner, the sophisticated browser-based AI agent announced late last year, has been significantly upgraded. The improved agent can now handle complex online tasks, such as purchasing items or tickets, without the user having to visit the relevant websites themselves.
Project Astra, Google’s ambitious multimodal AI initiative, works closely with Gemini, streaming live video and audio from a user’s device to Google’s AI systems for near-instant, real-time conversations about what the camera sees. Google also revealed collaborations with partners such as Samsung and Warby Parker to integrate Project Astra into an upcoming line of smart glasses, although no specific launch timeline was provided.
Another fascinating advancement unveiled at Google I/O is Beam, the company’s 3D telepresence system formerly known as Project Starline. Beam pairs a camera array with AI-driven software to construct realistic, hologram-like renderings of participants, delivering near-perfect, millimeter-scale head tracking and 3D video calls through Google Meet. It also features real-time speech translation that preserves the speaker’s voice, tone, and emotional expression, and Google Meet itself is gaining the same real-time translation capability independently of the Beam hardware.
A range of other notable updates was also introduced. Google’s Gemini apps now boast more than 400 million monthly active users, and new features for Gemini Live—including camera and screen-sharing functionalities powered by Project Astra—will become widely available shortly. Google previewed AI Mode for Search, a next-generation search assistant capable of handling complex queries and supporting real-time visual queries from smartphone cameras.
Google’s AI plans for Workspace further emphasize productivity, bringing personalized smart replies and inbox-organization tools to Gmail and advanced media-editing capabilities to Google Vids. Google’s NotebookLM platform gains video summarization features, while SynthID Detector, Google’s watermark-identification service, scans content for SynthID watermarks to help verify whether it was created with Google’s AI tools.
On the hardware side, Google unveiled Wear OS 6, which adopts the company’s refined Material 3 Expressive design guidelines and adds dynamic app theming tied to the watch face, giving developers more customizable, intuitive interactions. Google Play also received new developer-focused tools, including support for subscription upgrades, improved topic-based browsing, integrated audio samples, smoother in-app checkout experiences, and streamlined app release management options.
Finally, Android Studio now offers advanced AI-powered coding assistance through its new Journeys capability and an enhanced Agent Mode, designed to guide developers through complex, multi-step workflows and troubleshooting scenarios. An AI-backed crash insights feature also analyzes crash reports, surfaces likely causes in the code, and suggests fixes.
Overall, Google’s announcements at I/O 2025 mark significant advances in the company’s expansive AI ambitions, highlighting an increasing integration of intelligent technologies into virtually every facet of Google’s products, services, and developer toolkit.