
Google leases 600,000 TPUs, Anthropic deal adds billions to revenue


Google now has 600,000 of its custom Tensor Processing Units out on lease, a scale that would have seemed impossible a few years ago. While the company still runs a handful of these chips internally, the bulk are offered through standard Google Cloud agreements, letting developers tap the hardware without building their own racks. The move coincides with a multi-year pact with Anthropic, a startup that positions itself as a direct challenger to OpenAI.

That contract promises to pour billions into Google’s earnings and ties Anthropic’s future models to the same cloud infrastructure. In practice, the arrangement nudges a major AI competitor into Google’s fold, undercutting the long‑standing advantage Nvidia’s CUDA platform has enjoyed. The shift hints at a broader rebalancing of AI‑compute economics, where the cost and accessibility of specialized hardware could reshape which firms dominate large‑scale model training.


The remaining 600,000 chips are leased through traditional Google Cloud contracts. Anthropic's commitment adds billions of dollars to Google's bottom line and locks one of OpenAI's key competitors into Google's ecosystem.

Eroding the "CUDA moat"

For years, Nvidia's GPUs have been the clear market leader in AI infrastructure.

Beyond the hardware itself, Nvidia's CUDA ecosystem offers a vast library of optimized kernels and frameworks. Together with broad developer familiarity and a huge installed base, that software stack gradually locked enterprises into the "CUDA moat," a structural barrier that made abandoning GPU-based infrastructure impractically expensive.
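
Part of what chips away at that moat sits at the framework level: libraries such as JAX (used here purely as an illustration; the article names no specific framework) express models in hardware-agnostic terms, so the same code compiles to GPU or TPU backends without hand-written CUDA kernels. A minimal sketch:

```python
# Illustrative sketch only (not from the article): frameworks like JAX target
# an abstract accelerator backend, so the same model code runs on CPU, GPU,
# or TPU without hand-written CUDA kernels.
import jax
import jax.numpy as jnp

def predict(params, x):
    # Tiny two-layer MLP written in pure jax.numpy; XLA lowers it to
    # whichever backend is available.
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

predict_jit = jax.jit(predict)  # compiled for the default backend

if __name__ == "__main__":
    print("backend:", jax.default_backend())  # "cpu", "gpu", or "tpu"
    key = jax.random.PRNGKey(0)
    params = {
        "w1": jax.random.normal(key, (8, 16)),
        "b1": jnp.zeros(16),
        "w2": jax.random.normal(key, (16, 4)),
        "b2": jnp.zeros(4),
    }
    print(predict_jit(params, jnp.ones((2, 8))).shape)  # (2, 4) on any hardware
```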


The lease of 600,000 TPUv7 chips marks a tangible shift in how frontier models are built. Gemini 3 and Claude 4.5 Opus, trained on Google’s Ironwood‑based hardware, prove that a non‑GPU stack can handle the most demanding workloads. Yet, whether this will redraw the economics of large‑scale AI remains uncertain.
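
The article gives no details of how those models were trained, but as a rough illustration of what a non-GPU training stack can look like in practice, here is a minimal, hypothetical data-parallel step in JAX that runs across whatever local accelerators are present (TPU cores on a TPU VM, GPUs elsewhere):

```python
# Hypothetical sketch, not the setup used for Gemini or Claude: a toy
# data-parallel training step with jax.pmap, which maps the step across all
# local accelerator cores and all-reduces gradients between them.
import functools
import jax
import jax.numpy as jnp

def loss_fn(w, x, y):
    # Placeholder linear model; a real run would use a full network.
    return jnp.mean((x @ w - y) ** 2)

@functools.partial(jax.pmap, axis_name="cores")
def train_step(w, x, y):
    grads = jax.grad(loss_fn)(w, x, y)
    grads = jax.lax.pmean(grads, axis_name="cores")  # average grads across cores
    return w - 0.01 * grads

if __name__ == "__main__":
    n = jax.local_device_count()  # e.g. 8 cores on a single-host TPU VM
    key = jax.random.PRNGKey(0)
    w = jnp.broadcast_to(jnp.zeros((4, 1)), (n, 4, 1))  # replicated weights
    x = jax.random.normal(key, (n, 32, 4))              # batch sharded per core
    y = jax.random.normal(key, (n, 32, 1))
    w = train_step(w, x, y)
    print("per-core weight copies:", w.shape)  # (n, 4, 1)
```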

Anthropic’s commitment, which adds billions to Google’s revenue, also ties a major OpenAI rival to the company’s ecosystem, tilting the competitive landscape in Google’s favor. The move erodes Nvidia’s long‑standing “CUDA moat,” but the extent of that erosion is still unclear. If other developers follow suit, cloud‑based TPU leasing could become a viable alternative to traditional GPU clusters, potentially reshaping cost structures and hardware choices.

Conversely, the entrenched GPU supply chain and existing software tooling may temper the speed of any broader transition. In short, Google’s aggressive TPU deployment signals a credible challenge to GPU dominance, but the ultimate impact on the AI hardware market will depend on adoption rates, performance parity, and the willingness of the broader developer community to adjust their workflows.


Common Questions Answered

How many Tensor Processing Units (TPUs) has Google leased and through what type of contracts?

Google has leased out 600,000 TPUv7 chips, offering them via standard Google Cloud agreements that let developers access the hardware without building their own racks. This leasing model complements the handful of TPUs Google still runs internally.

What impact does the Anthropic multi‑year deal have on Google's revenue and competitive position?

The Anthropic agreement is projected to add billions of dollars to Google's bottom line and ties a major OpenAI rival to Google's ecosystem. By locking Anthropic into Google Cloud, the deal also erodes Nvidia's CUDA moat and strengthens Google's AI infrastructure foothold.

Which frontier models were trained on Google's Ironwood‑based hardware, and what does this demonstrate?

Gemini 3 and Claude 4.5 Opus were trained on the Ironwood‑based TPU stack, showing that a non‑GPU architecture can handle the most demanding AI workloads. This success suggests that large‑scale model training can be competitive without relying on Nvidia GPUs.

How does the leasing of 600,000 TPUs shift the economics of large‑scale AI model training?

By providing TPUs through Google Cloud leases, developers can avoid the capital expense of building their own hardware racks, potentially lowering entry costs for AI research. However, the article notes that it remains uncertain whether this will fundamentally redraw the economics of large‑scale AI.
