AI is the horizontal tech that applies universally: Sundar Pichai
Google should expect a response from rivals OpenAI, xAI and Anthropic, while Adobe and Canva may have reason to worry about Veo, Imagen and the AI filmmaking tool, Flow.
When Sundar Pichai took over as CEO of Google in August ten years ago, few could have foreseen that we’d be living in an artificial intelligence (AI)-led world a decade later.

Did Pichai see this transformation coming? He doesn’t let us in on the secret, but there are hints. Google began investing in tensor processing units (TPUs)—crucial to training AI models—a decade ago. The company probably had an idea of what lay ahead.
“Part of what has always appealed to me about Google—and the founders wrote it in their letter—is that we will take a deep technology approach to drive progress in people's lives. It's why we invested in TPUs 10 years ago, before the current moment, and why we’ve been investing in quantum computing for the last decade. We will continue to do so until it becomes a reality,” Pichai said during the closing session on day two of this year’s I/O conference.
That long-term vision is why Google has found itself in a position to reset benchmarks with its AI offerings—prompting responses from rivals including OpenAI, xAI, and Anthropic. It’s a reflection of how competitive the AI landscape has become. At this year’s Google I/O developer conference, the tech giant unveiled a bold vision for its Gemini model-led strategy.
This includes an “Agentic AI” vision for Gemini, which Google classifies as a universal AI assistant. Gemini competes with OpenAI’s GPT, Microsoft’s GPT-based Copilot, xAI’s Grok, and Anthropic’s Claude. None of these competitors currently offers the broad portfolio of intelligent capabilities that Google has now extended to Gemini and its creative models.

Google has made it clear that it has no intention of slowing down—it's the competition that must find ways to keep pace. Apple, for instance, is still working to piece together its Apple Intelligence suite, which has been present in its hardware since late last year. Earlier today, OpenAI announced it is acquiring io, a hardware company founded by former Apple design chief Jony Ive. The expectation is that OpenAI will now be better positioned to launch hardware products underpinned by its AI models.
“We’ve been innovating and shipping at a relentless pace,” says Pichai. In previous years, the I/O keynote featured the biggest software and developer-focused announcements. This year, however, the weeks leading up to the keynote saw a steady stream of releases—the Gemini 2.5 Pro preview, the coding agent AlphaEvolve, and significant updates to Gemini Live on Android devices, to name a few.
Among these innovations is Flow, an AI filmmaking tool that allows users to create cinematic clips, scenes, and stories using DeepMind’s most advanced models—Veo, Imagen, and Gemini. This is the kind of tool that could concern companies like Adobe and Canva, not just Google’s AI competitors.
“We’ve always taken a foundational, deep research approach, coupled with a science and technology strategy, but we’ve relentlessly worked to bring it into the hands of as many people as possible,” Pichai says. “Staying close to research” is one of the most fulfilling aspects of his role at Google. He highlights his proximity to the team led by Koray Kavukcuoglu, chief technology officer of Google DeepMind, which is driving innovation at a rapid pace.
“We’re in a moment where AI, as a horizontal technology, applies across everything. We take a forward-looking view and invest accordingly. Some ideas transition quickly into products; others take time,” says Pichai. “What I’m proud of is the depth and breadth of the research we are undertaking at Google DeepMind and Google Research.”
He discusses how Google is layering its core Search business with more powerful, Gemini-led AI. At I/O 2025, the company announced features like AI Mode in Search, Deep Research, and Search Live, which uses a phone’s camera for added context.
“We genuinely believe one of the value propositions in Search is that people use it to find what they’re looking for in the world. We are increasingly connecting them to that—with more context. And that’s the core experience. AI is going to enhance that in the same way. We think there’s deep value in the ecosystem and in a diversity of perspectives, and we want to bring that to users,” Pichai explains.
For Google, how users interact with its services remains critical—especially as interfaces evolve, with AI playing a larger role in Search, generative models, and creative workflows. These advancements are poised to gain further relevance with the video model Veo 3 and the image generation model Imagen 4.
“We spoke about intelligence, personalization, and agents as the three frontiers we’re pushing forward with in the coming year. For consumers, this will improve user experience, reduce friction, save time, and free them to do other things,” he says. One example is the ability to try on clothes virtually—potentially with greater accuracy—by uploading a personal photo. It’s a time-saving feature for avid and reluctant shoppers alike. And it’s part of how Search is evolving.