Business & Startups

Anthropic's deal backs Claude on Google TPUs, Amazon Trainium, Nvidia GPUs


When Anthropic closed its latest financing round, it also locked in three of the biggest compute platforms for its Claude models. Google’s Tensor Processing Units, Amazon’s Trainium chips and Nvidia’s GPUs will each run a slice of the startup’s workload, from large training jobs to low-latency inference and exploratory research. Spreading the work across rival platforms should help avoid bottlenecks and keep prices in check as the user base grows.

The company’s numbers are moving fast; recent reports suggest a revenue run rate that’s outpacing its compute spend. Investors are watching because this kind of multi-vendor setup could become a template for AI firms trying to juggle performance, cost and vendor lock-in.

---

The deal fits Anthropic's multi-cloud strategy, in which Claude models run across Google TPUs, Amazon Trainium, and Nvidia GPUs, with each platform optimized for training, inference, or research. Anthropic's compute needs are rising with rapid revenue and user growth: the annual revenue run rate is nearing $7 billion, Claude is used by more than 300,000 businesses (a 300-fold increase in two years), customers spending $100k+ are up roughly 7× year over year, and Claude Code hit a $500 million annualized run rate within two months of launch. Despite the TPU expansion, AWS remains Anthropic's primary cloud partner: Amazon has invested $8 billion (versus Google's $3 billion total to date), and Anthropic's Project Rainier supercomputer uses Trainium 2 chips to reduce cost per unit of compute by avoiding premium chip margins.

Related Topics: #AI #Anthropic #Claude #Google TPUs #Amazon Trainium #Nvidia GPUs #Project Rainier #Mico #Copilot

Anthropic’s new cloud deal puts Claude on Google TPUs, Amazon Trainium and Nvidia GPUs. By spreading work across three clouds the company can pick the right chip for training, inference or research, according to the announcement. Revenue and user counts are climbing fast, so Anthropic says its compute demand is rising, but it didn’t share the exact annual run rate.

At the same time, OpenAI completed a restructuring that turned its operating arm into a public-benefit corporation while the nonprofit parent retains control, and it sealed a new agreement with Microsoft. OpenAI also launched ChatGPT Atlas, an AI-powered browser, while Microsoft introduced Mico, an animated avatar for Copilot. Both companies are scaling quickly, yet it is unclear how their different cloud strategies will affect cost efficiency or long-term performance.

The deals involve huge sums, reportedly tens of billions of dollars for the Google-Anthropic pact, but we still don’t know how much developers or end users will feel the impact. I’ll be watching to see whether the promised compute optimizations turn into real-world gains.

Common Questions Answered

How does Anthropic's multi‑cloud strategy allocate Claude workloads across Google TPUs, Amazon Trainium chips, and Nvidia GPUs?

Anthropic assigns each hardware platform a specific role: Google TPUs handle large‑scale training runs, Amazon Trainium chips are used for low‑latency inference, and Nvidia GPUs support experimental research tasks. By matching workloads to the strengths of each platform, the company can optimize performance and cost efficiency.

What revenue and user‑growth figures did Anthropic disclose in its latest financing announcement?

The company reported an annual revenue run rate approaching $7 billion and said Claude is now used by more than 300,000 businesses, a 300-fold increase over two years. Additionally, the number of customers spending more than $100,000 per year grew about seven-fold year over year.

Which Claude product reached a $500 million annualized run rate, and what does this signify for Anthropic's business?

Claude Code hit a $500 million annualized run rate, indicating strong market demand for the code‑generation capabilities of the model. This milestone underscores Anthropic's expanding revenue streams beyond its core conversational AI offerings.

Why did Anthropic decide to spread its compute across rival cloud providers instead of consolidating on a single platform?

Distributing workloads across Google TPUs, Amazon Trainium, and Nvidia GPUs helps Anthropic avoid potential bottlenecks and maintain competitive pricing as its user base expands. The multi-platform approach also lets the company match each chip's strengths to training, inference, and research workloads.