Saved February 14, 2026
Do you care about this?
Amazon Web Services (AWS) and OpenAI have formed a $38 billion partnership to power OpenAI's AI workloads. AWS will provide advanced computing resources, including NVIDIA GPUs and the ability to scale to tens of millions of CPUs, to support OpenAI's generative AI projects. The infrastructure is designed for high efficiency and low-latency performance.
If you do, here's more
Amazon Web Services (AWS) and OpenAI have formed a multi-year partnership worth $38 billion that gives OpenAI access to AWS's infrastructure for its AI workloads. OpenAI will run on Amazon EC2 UltraServers featuring hundreds of thousands of NVIDIA GPUs, with the potential to scale to tens of millions of CPUs. This infrastructure is designed to support a range of AI workloads, from serving ChatGPT to training new models. OpenAI is beginning to use the compute capacity immediately, with full deployment targeted for the end of 2026.
The demand for computing power in AI has surged, prompting OpenAI to draw on AWS's experience running large-scale AI infrastructure. AWS operates clusters with over 500,000 chips, providing the reliability and performance needed for advanced AI tasks. OpenAI CEO Sam Altman emphasized that scaling frontier AI depends on massive, reliable compute, while AWS CEO Matt Garman said AWS's infrastructure is uniquely positioned to support OpenAI's extensive AI needs.
This partnership builds on previous collaborations between the two companies, including the availability of OpenAI's foundation models on Amazon Bedrock. Numerous organizations, such as Peloton and Thomson Reuters, are already utilizing these models for various applications, from coding to scientific analysis. The integration of OpenAI’s technology into AWS’s offerings aims to deliver cutting-edge AI solutions to a broader audience.