OpenAI signs $10 billion deal to use Amazon Trainium3 chips, with 4.4× faster compute
OpenAI has just inked a ten‑billion‑dollar agreement to run its models on Amazon’s newest Trainium3 silicon. The size of the contract alone signals a deepening reliance on custom hardware to keep pace with ever‑larger language models. While the partnership is still fresh, the numbers Amazon is putting on the table are striking: a claim of more than four‑fold speed gains and a comparable jump in efficiency over the previous generation.
That leap matters because OpenAI’s latest workloads demand not just raw throughput but also tighter energy budgets and faster memory access. It also follows a massive rollout of 500,000 Trainium2 units in Project Rainier, a joint effort with Anthropic that the companies billed as the world’s largest AI compute cluster. In that context, the promised improvements of Trainium3 could reshape how quickly OpenAI can train and serve its next wave of models.
AWS says Trainium3 delivers up to 4.4 times more compute, four times better energy efficiency, and nearly four times higher memory bandwidth than Trainium2. Trainium3 follows AWS's deployment of 500,000 Trainium2 chips in Project Rainier with Anthropic, described as the world's largest AI compute cluster. AWS also previewed Trainium4, expected to deliver at least six times higher FP4 performance, with further gains in FP8 performance and memory bandwidth.
In November, AWS and OpenAI announced a multi-year partnership worth $38 billion to run and scale OpenAI's core AI workloads on AWS infrastructure. Under that agreement, OpenAI will begin using AWS compute immediately, with all capacity targeted for deployment before the end of 2026 and additional expansion planned through 2027 and beyond. OpenAI's use of AWS Trainium chips could reduce its reliance on NVIDIA-based systems.
Will the partnership reshape costs? OpenAI has agreed to a $10 billion investment from Amazon, tying its next‑generation models to AWS’s Trainium3 accelerators. The chips promise up to 4.4 times more compute, fourfold energy efficiency gains and nearly fourfold memory bandwidth over the previous generation.
Yet the negotiations are described as “very fluid,” and details of the integration timeline remain unclear. If the deal proceeds, OpenAI could be valued at more than $500 billion, a figure cited by Bloomberg. AWS points to its earlier deployment of 500,000 Trainium2 units in Project Rainier with Anthropic as evidence of scale.
However, whether the newer silicon will deliver the advertised performance in real‑world workloads has not been independently verified. The partnership also marks the first public link between OpenAI’s services and Amazon’s custom AI hardware. It remains uncertain whether the cost and energy advantages will translate into measurable benefits for end users, but the arrangement signals a notable shift in how leading AI firms source compute resources.
Further Reading
- Amazon's $10B+ OpenAI Investment: A Strategic Shift in AI Infrastructure and Cloud Computing - AInvest
- OpenAI and Amazon discuss $10 billion+ investment tied to AI chips and Trainium3 rollout - Cryptopolitan
- Amazon eyes 15 trillion won investment in OpenAI and offers ‘Trainium 3’ AI chips - Chosun Biz (English edition)
- OpenAI eyes $10 billion investment from Amazon as it weighs Trainium chips - Techzine Global
- AWS and OpenAI announce multi-year strategic partnership for AI workloads on AWS infrastructure - About Amazon (AWS News)
Common Questions Answered
What are the performance improvements claimed for Amazon's Trainium3 chips in the OpenAI deal?
AWS states that Trainium3 delivers up to 4.4 times more compute, four times better energy efficiency, and nearly four times higher memory bandwidth compared to Trainium2. These gains are intended to accelerate OpenAI's next‑generation language models.
How much is the financial commitment between OpenAI and Amazon for using Trainium3 accelerators?
OpenAI has signed a ten‑billion‑dollar agreement with Amazon to run its models on Trainium3 hardware. The $10 billion investment ties OpenAI's future workloads to AWS's custom silicon platform.
What previous AWS‑Anthropic collaboration is referenced, and how does it relate to Trainium3?
AWS previously deployed 500,000 Trainium2 chips in Project Rainier with Anthropic, creating what the companies describe as the world’s largest AI compute cluster. The Trainium3 rollout builds on that infrastructure, promising substantially higher performance for OpenAI's models.
What future hardware does AWS preview, and what performance gains are expected over Trainium3?
AWS previewed Trainium4, which is expected to deliver at least six times higher FP4 performance and additional improvements in FP8 performance and memory bandwidth. These advances would further extend the speed and efficiency advantages beyond the 4.4× compute boost of Trainium3.