OpenAI's Ambitious Chip and Cloud Deals
OpenAI has secured deals worth $10 billion for chips and cloud services. These agreements are deliberately structured to prevent any single provider, including Intel, Google, or Amazon, from gaining dominant control. The strategy distributes power across suppliers, enables rapid scaling, and keeps OpenAI in control of its technological infrastructure.
Shifting Dependencies in AI Hardware
While Nvidia currently powers OpenAI's operations, that reliance is expected to diminish. After Nvidia's strong third-quarter earnings report, CEO Jensen Huang acknowledged that all of OpenAI's current workloads run on Nvidia hardware. OpenAI, however, has been actively diversifying its chip suppliers to speed development and reduce dependence on any single vendor.
OpenAI Taps Cerebras, Broadcom, and AMD to Spread Chip Bets Wide
The Cerebras deal is a key component of OpenAI's broader strategy: OpenAI will use 750 megawatts of Cerebras chips through 2028 to run large AI models and other compute-intensive workloads. The deal is part of a larger infrastructure buildout that began last year with partnerships involving Nvidia, AMD, and Broadcom, and it has contributed to OpenAI's substantial private-market valuation.
In September, Nvidia committed $100 billion to support the development of 10 gigawatts of systems for OpenAI, roughly the power drawn by 8 million homes. That commitment was projected to require 4 to 5 million GPUs. OpenAI's strategy, however, extends beyond Nvidia. Shortly after the Nvidia announcement, OpenAI revealed another significant deal for 10 gigawatts of custom AI accelerators, known as XPUs, from Broadcom. These custom chips have been co-developed with OpenAI for over a year, and the partnership gave Broadcom's market value a significant boost, underscoring the impact of OpenAI's strategic alliances.
Google, Amazon, and Intel Left Out as OpenAI Builds Its Own Chip Stack
Major tech companies such as Amazon, Google, and Intel play a more limited role in OpenAI's current hardware strategy. While OpenAI has a $38 billion cloud deal with Amazon Web Services (AWS), and AWS is helping build new data centers, there is no firm commitment to use Amazon's proprietary Inferentia or Trainium chips. Discussions are ongoing, but those chip integrations have not been finalized.
Google Cloud is also providing capacity under an earlier agreement, but OpenAI has shown no interest in using Google's tensor processing units (TPUs), even though Broadcom is involved in their production. Intel, which reportedly had an early opportunity to invest in and supply OpenAI, has fallen behind. To compete, Intel introduced its Crescent Island chip for AI inference in October, but widespread sampling is not expected until late 2026. Intel's efforts to stay competitive in AI have required financial backing from Nvidia and the U.S. government, and its progress will become clearer in upcoming tech earnings reports.