Pichai opens Cloud Next 2026 with $240B backlog, 750M Gemini users, and a plan to turn Search into an agent manager

Summary: Sundar Pichai opened Cloud Next 2026 with Google Cloud at $70 billion in annual revenue, 48% growth, a $240 billion backlog up 55% in a year, and $175-185 billion in planned capital expenditure. The Gemini app has 750 million monthly users, AI Overviews reach two billion, and the Gemini API processed 85 billion requests in January alone. Pichai framed the conference around Search evolving from a retrieval engine into an “agent manager” and announced the Universal Commerce Protocol with Shopify, Target, and Walmart, while positioning Google’s full-stack integration from custom silicon to consumer distribution as the advantage competitors cannot replicate.

Sundar Pichai opened Google Cloud Next 2026 on Tuesday with a set of numbers that reframe the competitive dynamics of enterprise AI. Google Cloud is now generating more than $70 billion in annual revenue, growing at 48% year on year, with a backlog of $240 billion, up 55% from roughly $155 billion a year ago. The number of billion-dollar deals Google Cloud signed in 2025 exceeded the combined total of the three previous years. Existing customers are outpacing their own commitments by 30%, spending faster than they contracted. Google has committed $175 billion to $185 billion in capital expenditure for 2026, nearly doubling the $91.4 billion it spent last year. Pichai described the moment as “a fundamental rewiring of technology and an accelerant of human ingenuity.” The money suggests he may not be exaggerating.

The keynote, titled “The Agentic Cloud,” was less a product launch than a thesis statement. Google is positioning itself not as a cloud provider that offers AI but as the operating system for what it calls the agentic enterprise: a model in which AI agents handle routine business operations autonomously, communicate with each other across platforms, and interact with the physical world through commerce, search, and real-time data. The pitch is that Google is the only company that controls every layer of that stack, from the custom silicon that runs inference, to the frontier models that power reasoning, to the cloud platform that hosts the agents, to the productivity suite and search engine through which three billion users interact with them.

The scale of the machine

The Gemini app has reached 750 million monthly active users as of the fourth quarter of 2025, up 100 million from the previous quarter. AI Overviews, Google’s AI-generated search summaries, reach two billion monthly users across more than 200 countries and drive 10% more search queries globally. AI Overviews now trigger on approximately 48% of all tracked queries, up from 31% in February 2025, a roughly 55% increase in a year. The Gemini API processed 85 billion requests in January 2026, a 142% increase from 35 billion in March 2025. Eight million paid Gemini Enterprise seats are deployed across 2,800 companies. Thirteen million developers are building with Google’s generative models. Gemini 3 Pro has had, in Pichai’s words, “the fastest adoption of any model in our history.”
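The year-on-year deltas behind those figures check out; a quick sanity pass on the two growth rates reported above:

```python
# Verify the year-on-year growth rates implied by the article's figures.
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Gemini API requests: 35 billion (March 2025) -> 85 billion (January 2026)
api_growth = pct_increase(35, 85)

# AI Overviews trigger rate: 31% (February 2025) -> 48% of tracked queries
overview_growth = pct_increase(31, 48)

print(f"API requests: +{api_growth:.1f}%")            # +142.9%
print(f"AI Overviews trigger rate: +{overview_growth:.1f}%")  # +54.8%
```

The 85-billion figure works out to a 142.9% increase over 35 billion, matching the reported 142%, and 48% of queries against 31% a year earlier is a 54.8% rise.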


These are not cloud metrics. They are platform metrics. Google is arguing that its advantage over AWS, Azure, OpenAI, and Anthropic lies not in any single product but in the fact that it reaches more users, processes more queries, and touches more surfaces than any competitor. Search alone handles more than a billion shopping interactions per day. Workspace has more than three billion users. Android runs on billions of devices. The thesis is that when AI agents become the primary interface for work and commerce, the company with the largest existing surface area wins, because the agents need somewhere to run, something to connect to, and someone to serve.

Search becomes the agent manager

Pichai’s most consequential framing may have come in a podcast appearance earlier this month: “A lot of what are just information-seeking queries will be agentic in Search. You’ll be completing tasks. You’ll have many threads running.” He described Search evolving from a retrieval engine into an “agent manager,” an orchestration layer that dispatches AI agents to complete tasks on a user’s behalf rather than returning a list of links.

The infrastructure for this is already being built. Google announced the Universal Commerce Protocol at NRF in January, an open-source standard for agentic commerce co-developed with Shopify, Etsy, Wayfair, Target, and Walmart. More than 20 partners have endorsed it, including Adyen, American Express, Best Buy, Flipkart, Macy’s, Mastercard, Stripe, The Home Depot, Visa, and Zalando. UCP is built on REST and JSON-RPC transports with the Agent2Agent protocol, Model Context Protocol, and a new Agent Payments Protocol built in. It lets AI agents treat any participating store as a programmable service, with the merchant remaining the merchant of record. Pichai, who described himself as “an indecisive shopper,” said he is “looking forward to the day when agents can help me get from discovery to purchase.”
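Google has not published wire-level details in this article, so the following is only an illustrative sketch of what a UCP-style interaction could look like: a JSON-RPC 2.0 request from a shopping agent to a participating merchant, consistent with the description above (JSON-RPC transport, merchant as merchant of record, tokenized payment per the Agent Payments Protocol). The method name, field names, and structure here are hypothetical, not the published UCP schema.

```python
import json

def build_checkout_request(request_id, sku, quantity, payment_token):
    """Assemble a JSON-RPC 2.0 request body for a hypothetical
    'checkout.create' method on a UCP-participating merchant.
    All method and field names are illustrative assumptions."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "checkout.create",  # hypothetical method name
        "params": {
            "line_items": [{"sku": sku, "quantity": quantity}],
            # The merchant stays merchant of record; the agent passes
            # only a tokenized payment credential, as the article's
            # description of the Agent Payments Protocol implies.
            "payment": {"token": payment_token},
        },
    }

body = build_checkout_request(1, "SKU-12345", 2, "tok_abc")
print(json.dumps(body, indent=2))
```

The point of the sketch is the shape of the exchange: the agent treats the store as a programmable service it can call, while payment and fulfilment stay with the merchant.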

The implications for the advertising industry are significant. If Search shifts from showing links that users click to dispatching agents that complete purchases, the entire cost-per-click model that funds Google’s advertising business, and by extension the businesses of every company that advertises on Google, changes. Retailers are already deploying AI-powered shopping through Gemini, ChatGPT, and Copilot. The question is whether agentic commerce cannibalises Google’s own advertising revenue or whether Google can capture a larger share of the transaction itself. UCP suggests Google is betting on the latter.

The full-stack argument

The competitive positioning at Cloud Next was unusually direct. Thomas Kurian said competitors are “handing you the pieces, not the platform,” leaving enterprise teams to integrate components themselves. The claim rests on Google’s vertical integration: Ironwood TPUs, along with a forthcoming eighth generation split between Broadcom-designed training chips and MediaTek-designed inference chips, provide the silicon. Gemini 3 Pro, 3 Flash, and 3.1 Pro provide the models. The Gemini Enterprise Agent Platform, formerly Vertex AI, provides the developer tools and runtime. Workspace Studio provides the no-code agent builder. Search and Android provide the consumer distribution. No other company assembles all of these under one roof.


The argument has a specific target: Microsoft Copilot, which despite being embedded in virtually every Fortune 500 company has struggled with adoption. Only 3.3% of Microsoft 365 users with Copilot access actually pay for it, and its accuracy net promoter score deteriorated to negative 24.1 by September 2025. Google’s eight million paid Gemini Enterprise seats in roughly four months represents a faster trajectory, though from a much smaller base. GitHub has frozen new Copilot sign-ups because agentic coding sessions consume more compute than users pay for, illustrating why owning the silicon layer, as Google does, is not just a technical advantage but an economic one.

The capital question

The $175 billion to $185 billion in planned capital expenditure is the number that makes the rest of the strategy credible or alarming, depending on how the next two years unfold. Roughly 60% goes to servers and 40% to data centres and networking equipment. Combined with Microsoft, Meta, and Amazon, total big tech AI infrastructure spending is approaching $700 billion this year, a figure large enough to reshape energy markets and strain power grids. Pichai acknowledged on the fourth-quarter earnings call that the “top question is definitely around compute capacity and all the constraints, be it power, land, supply chain,” and expects Google to remain supply-constrained through 2026.

The backlog provides the justification. At $240 billion, it represents more than three years of current revenue contracted but not yet delivered. Thirteen product lines each generate more than $1 billion in annual revenue. The ServiceNow deal alone was worth $1.2 billion over five years. If the demand is real, and the backlog suggests it is, then the capital expenditure is not a gamble but an obligation: the cost of building the infrastructure to fulfil commitments already made.

Google Cloud holds roughly 11% of the cloud infrastructure market, behind AWS at 31% and Azure at 25%. The gap has narrowed: Google grew at 48% in the fourth quarter of 2025, the fastest of the three, and achieved sustained profitability for the first time. But the gap remains. What Pichai presented at Cloud Next is not a plan to close that gap through incremental cloud sales. It is a plan to redefine what the cloud is, from a place where companies store data and run workloads to a platform where AI agents perform work, make decisions, complete purchases, and coordinate with each other across organisational boundaries. If that transition happens, the company that built the agents, the models, the chips, the protocols, and the distribution channels stands to capture a share of the value that the current market share numbers do not reflect. That is the bet. Cloud Next 2026 is the moment Google made it explicit.
