
Meta Eyes Google TPUs, Up to One Million Units, as NVIDIA Alternative

2 min read

Meta’s AI group appears to be eyeing Google’s tensor processing units as a possible alternative to NVIDIA’s silicon. Bloomberg reports that the social-media giant could end up ordering as many as one million TPUs - the same hardware Google has already promised to Anthropic in a separate deal. That kind of volume would sit on top of Meta’s own spending plans, which Bloomberg estimates will top $100 billion by 2026.

A switch like this would give Meta a non-NVIDIA route for training its biggest models, but the details are still hazy; neither company has confirmed a final pact. Even so, the numbers point to a sizeable wager on Google’s custom chips and their place in the high-performance AI arena.

---

If finalised, the Meta-Google arrangement would bolster TPUs as a credible alternative in high-performance AI computing. Google has already signed a separate agreement to provide up to one million TPUs to Anthropic. With Meta's capital expenditure projected to exceed $100 billion in 2026, Bloomberg analysts estimate the company could spend $40-$50 billion next year on inferencing-chip capacity alone, potentially accelerating demand for Google Cloud services.

TPUs, designed more than a decade ago specifically for AI workloads, have gained traction as companies evaluate customised, power-efficient alternatives to traditional GPUs. While NVIDIA still commands the vast majority of the AI chip market and AMD remains a distant second, TPUs are emerging as a strong contender, especially as companies seek to mitigate reliance on a single dominant supplier.


Meta’s new direction could shake up where AI chips come from. If the talks end with a multi-billion-dollar spend, Google’s TPUs might leave the cloud-only world and show up in Meta’s data centres as early as 2027 - a move that would put pressure on NVIDIA’s long-standing lead. The deal remains tentative: no terms have been made public, and it isn’t clear whether Meta would buy the hardware outright or rent capacity through Google Cloud, possibly starting as soon as next year.

With capital expenditure expected to top $100 billion in 2026, Meta certainly has the cash to make a splash, but the exact size of any TPU rollout hasn’t been spelled out. Google recently pledged up to one million TPUs for Anthropic, which suggests it can crank out volume, yet whether that capacity will also flow to Meta remains unconfirmed.

So, while a viable TPU alternative is gaining momentum, the real impact on the high-performance AI computing market - and on NVIDIA’s dominance - is still up in the air. The broad outline is clear; the final shape will hinge on negotiations that have yet to conclude.

Common Questions Answered

Why is Meta considering Google TPUs as an alternative to NVIDIA silicon?

Meta is evaluating Google TPUs because they could provide a high‑performance, scalable backbone for its AI workloads, potentially reducing reliance on NVIDIA. Bloomberg reports that Meta may tap up to one million TPUs, aligning with its projected $100 billion capital expenditure by 2026.

How many TPUs could Meta potentially acquire under the proposed deal, and how does this compare to Google's agreement with Anthropic?

Meta could procure as many as one million TPUs, matching the maximum number Google has already pledged to Anthropic in a separate agreement. This parity suggests Meta would have access to a comparable hardware pool for its own data-centre AI inference needs.

What portion of Meta's 2026 spending is expected to go toward inferencing‑chip capacity, and what impact might this have on Google Cloud services?

Bloomberg analysts estimate Meta could spend $40‑$50 billion on inferencing‑chip capacity in 2026 alone. Such a large investment would likely boost demand for Google Cloud services, especially if Meta opts for a rental model rather than outright purchases.

When could Google’s TPUs potentially be deployed in Meta’s data centres, and what would this mean for NVIDIA’s market position?

If the talks succeed, Google’s TPUs could be installed in Meta’s data centres as early as 2027. This shift would challenge NVIDIA’s long‑standing dominance in high‑performance AI hardware by introducing a credible cloud‑origin alternative.

What uncertainties remain about the Meta‑Google TPU arrangement, according to the article?

The article notes that no final terms have been disclosed, leaving it unclear whether Meta will purchase the TPUs outright or use a rental model via Google Cloud. Additionally, the overall multi‑billion‑dollar spend and timeline remain tentative.