OpenAI selects AWS in USD 38 billion deal to run core AI workloads
When OpenAI announced it would move most of its compute to Amazon Web Services, the headline was a $38 billion deal. The press release called it "OpenAI selects AWS in USD 38 billion deal to run core AI workloads," and that pretty much sums up what's happening: OpenAI's next-generation models will run on Amazon's servers. Matt Garman, AWS's chief executive, called the move a "foundational step," saying the cloud giant's infrastructure should keep pace with OpenAI's growth.
It's hard to say exactly how the two firms plan to turn that massive capacity into tools that more developers and businesses can actually use. The size of the contract points to a long-term relationship, but the fine print is still vague. For a startup trying to embed GPT-4, or a large enterprise that relies on OpenAI's API, the real impact will depend on pricing, latency, and how quickly new features roll out.
In short, the partnership promises more power, yet the details will decide who benefits most.
OpenAI's announcement framed the deal in sweeping terms: "Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone." Matt Garman, CEO of AWS, said the company's infrastructure would serve as a foundation for OpenAI's growing operations. "As OpenAI continues to push the boundaries of what's possible, AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions," Garman said. "The breadth and immediate availability of optimised compute demonstrates why AWS is uniquely positioned to support OpenAI's vast AI workloads." The new infrastructure, featuring NVIDIA GB200 and GB300 GPUs connected via Amazon EC2 UltraServers, is designed to deliver low-latency performance across large-scale clusters.
Will this partnership actually shift OpenAI's compute strategy? The $38 billion, seven-year deal ties the company to Amazon Web Services for its core AI workloads. OpenAI plans to run hundreds of thousands of NVIDIA GPUs and, if demand spikes, could lean on tens of millions of CPUs.
The company says the migration to AWS has already begun. OpenAI's own announcement pointed to a "broad compute ecosystem" that will "bring advanced AI to everyone." How much of that capacity will actually be used?
That's still fuzzy. The financial terms are clear, yet it's hard to say whether the pact will speed up new products or bring cheaper AI to users. Both sides pitch the tie-up as a step toward the "next era" of AI, but neither has released hard performance numbers.
As the arrangement rolls out, we’ll be watching for any concrete signs that the promised scale shows up in real results.
Common Questions Answered
What is the monetary value and duration of the deal between OpenAI and AWS?
The agreement is valued at roughly $38 billion and spans seven years. This long‑term contract ties OpenAI's core AI workloads to Amazon Web Services for the duration.
Which hardware resources will OpenAI access through the AWS partnership?
OpenAI will tap into hundreds of thousands of NVIDIA GPUs hosted on AWS and has the option to scale to tens of millions of CPUs if needed. These resources are intended to support the training and inference of its next‑generation models.
How does AWS CEO Matt Garman describe the role of AWS infrastructure for OpenAI?
Matt Garman says AWS's best‑in‑class infrastructure will serve as a foundational backbone for OpenAI's expanding AI ambitions. He emphasizes that the partnership strengthens a broad compute ecosystem that will bring advanced AI to everyone.
What impact is the AWS partnership expected to have on OpenAI's compute strategy?
The partnership is expected to reshape OpenAI's compute strategy by migrating the bulk of its workloads to Amazon's data centers. That migration is already underway, positioning AWS as the primary platform for OpenAI's core AI operations.