Google's Ironwood TPU to be generally available on Cloud in weeks
Starting in the next few weeks, Google will let anyone with a Google Cloud account spin up its newest AI accelerator, the Ironwood TPU. There is no waitlist to join - customers can provision the hardware straight away. The chip comes from Google's in-house silicon team, and it isn't reserved for any single Google product.
Companies that need huge compute power - think analytics firms crunching petabytes or a game studio training a next-gen model - have already been using Google's custom processors. Now the same silicon that fuels Google's internal research is becoming a standard option on the public cloud, meaning the hardware behind Gemini, Imagen and Veo is turning into a shared resource.
TPUs are purpose-built chips for AI workloads. Google offers them to Cloud customers while also using the same hardware to train and serve Gemini, Imagen, Veo and its other model families. Large-scale Cloud customers have already run their own AI workloads on TPUs, so the ecosystem is well established.
Anthropic, the company behind the Claude family of AI models, has long used TPUs via Google Cloud for its workloads and recently expanded its partnership with Google to deploy over 1 million new TPUs. Indian multinational conglomerate Reliance also recently unveiled its latest venture, Reliance Intelligence, which will use Google Cloud infrastructure running on TPUs. "With Ironwood, we can scale up to 9,216 chips in a superpod linked with breakthrough Inter-Chip Interconnect (ICI) networking at 9.6 Tb/s," Google said in its announcement.
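For a sense of scale, here is a back-of-envelope sketch of what those superpod figures imply. Note the assumption: the announcement quotes 9.6 Tb/s for ICI networking without spelling out whether that is a per-chip figure, so the aggregate number below is an illustration, not a Google-stated spec.

```python
# Back-of-envelope arithmetic on the superpod figures quoted in the
# announcement: 9,216 chips linked by ICI networking at 9.6 Tb/s.
# ASSUMPTION: we treat 9.6 Tb/s as per-chip ICI bandwidth.

CHIPS_PER_SUPERPOD = 9_216    # stated in Google's announcement
ICI_TBPS_PER_CHIP = 9.6       # assumption: per-chip figure

aggregate_tbps = CHIPS_PER_SUPERPOD * ICI_TBPS_PER_CHIP
aggregate_pbps = aggregate_tbps / 8 / 1_000  # terabits/s -> petabytes/s

print(f"Aggregate ICI bandwidth: {aggregate_tbps:,.1f} Tb/s "
      f"(~{aggregate_pbps:.1f} PB/s)")
```

Under that assumption, a full superpod would move on the order of tens of petabytes per second between chips - the kind of interconnect needed to make 9,216 accelerators behave like one machine.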
Will the promised speed-ups survive real-world use? Google says Ironwood, its seventh-generation TPU, will hit general availability on Cloud in a few weeks, so TPU v7 should start showing up in a wider mix of AI jobs. The chip is billed as delivering roughly ten times the peak performance of TPU v5 and about four times the per-chip performance of TPU v6 for both training and inference.
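It is worth noting what those two headline ratios imply when read together. The caveat: the announcement mixes a "peak performance" figure (vs. v5) with a "per-chip" figure (vs. v6), so treating them as the same metric is an assumption made purely for illustration:

```python
# Headline claims from the announcement, taken at face value:
ironwood_vs_v5 = 10.0   # ~10x peak performance of TPU v5 (claimed)
ironwood_vs_v6 = 4.0    # ~4x per-chip performance of TPU v6 (claimed)

# ASSUMPTION: if both ratios referred to the same metric (the
# announcement does not say so), TPU v6 would work out to roughly
# 2.5x TPU v5:
implied_v6_vs_v5 = ironwood_vs_v5 / ironwood_vs_v6
print(f"Implied TPU v6 over TPU v5: ~{implied_v6_vs_v5:.1f}x")
```

In other words, the generation-over-generation jump Google is claiming for Ironwood is larger than the one implied for the v5-to-v6 transition - exactly the kind of claim that independent benchmarks will need to confirm.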
That means customers can now throw their own models at the new silicon while Google keeps running Gemini, Imagen, Veo and other internal families on it. The announcement, however, skips any hard numbers on cost efficiency or how the gains play out on different workloads. We’ve seen big Cloud users adopt earlier TPUs, but it’s unclear whether Ironwood will pull in new segments.
In practice, the real impact will hinge on how developers weave the hardware into existing pipelines and whether the headline claims turn into measurable productivity bumps. The rollout is right around the corner, but we’ll have to wait for solid results.
Further Reading
- Ironwood: The first Google TPU for the age of inference - The Keyword (Google Blog)
- Ironwood TPUs and new Axion-based VMs for your AI workloads - Google Cloud Blog
- TPU v7, Google's answer to Nvidia's Blackwell, is nearly here - The Register
Common Questions Answered
When will Google's Ironwood TPU be generally available on Google Cloud?
Google announced that the Ironwood TPU will be generally available on Cloud within the coming weeks. This rollout will let developers and enterprises provision the hardware without needing to join a lengthy waitlist.
How does the performance of Ironwood TPU (TPU v7) compare to earlier TPU generations?
The Ironwood TPU, also known as TPU v7, claims a ten‑fold peak performance increase over TPU v5. It also delivers roughly four times the performance per chip for both training and inference when compared with TPU v6.
Which internal Google AI model families are run on the Ironwood TPU?
Google trains and deploys its Gemini, Imagen, and Veo model families, among other internal AI models, on its TPU hardware. These workloads stand to benefit from Ironwood's higher compute density and speed.
Which external partner has historically used Google Cloud TPUs and recently expanded its partnership?
Anthropic, the company behind the Claude series of AI models, has long used Google Cloud TPUs for its workloads. The partnership was recently expanded, with plans to deploy over 1 million additional TPUs.