Broadcom’s Chip President Clears the Air: $10B Customer Isn’t OpenAI

  • Broadcom (AVGO) and OpenAI, the AI startup valued at $500 billion, announced a partnership to co-develop and deploy 10 gigawatts of custom AI accelerators, with initial rack deployments starting late next year and full rollout by 2029.
  • Charlie Kawwas clarified that OpenAI is not the mystery $10 billion customer from Broadcom’s September earnings call; that client’s orders are expected to lift Broadcom’s AI revenue starting next year, with analysts speculating the buyer is Google (GOOGL), Meta (META), or ByteDance.
  • This collaboration follows 18 months of joint work and aligns with OpenAI’s multi-billion dollar deals with Advanced Micro Devices (AMD), Nvidia (NVDA), and CoreWeave (CRWV), highlighting a shift toward diversified AI hardware supply chains.


Broadcom (AVGO), a leading semiconductor firm, has deepened its collaboration with OpenAI, the AI powerhouse valued at $500 billion, to co-develop and deploy 10 gigawatts of custom artificial intelligence accelerators. This partnership, announced on Monday, underscores the intensifying competition in the AI hardware ecosystem, where hyperscalers and startups alike are racing to secure tailored compute solutions amid surging demand for advanced models.

Charlie Kawwas, president of Broadcom’s semiconductor solutions group, clarified during a CNBC appearance alongside OpenAI President Greg Brockman that the startup is not the mystery $10 billion customer referenced in the company’s September earnings call. That unnamed client, whose custom AI chip orders are expected to boost Broadcom’s AI revenue forecast for next year once shipments begin, remains undisclosed. Analysts have long speculated that the buyer is one of Broadcom’s major web-scale partners such as Google (GOOGL), Meta (META), or ByteDance, the parent of TikTok, given their voracious appetite for specialized silicon to power data centers.

Speaking lightheartedly, Kawwas said he would welcome a $10 billion purchase order from Brockman, though no such commitment has materialized. The joint Broadcom-OpenAI initiative, however, marks a tangible step forward after 18 months of groundwork. Deployment of racks featuring these bespoke chips will begin late next year, with full rollout targeted for 2029. Brockman emphasized the strategic edge of this hardware integration, noting it allows OpenAI to embed insights from its frontier models directly into the silicon for enhanced performance.

This move aligns with OpenAI’s aggressive expansion of AI infrastructure, evidenced by recent multi-billion dollar pacts with Advanced Micro Devices (AMD), Nvidia (NVDA), and CoreWeave (CRWV). Such alliances reflect a broader industry shift toward diversified supply chains, as AI leaders mitigate risks from single-vendor dependencies – particularly Nvidia’s dominance in GPUs – while optimizing costs and efficiency for training and inference workloads. Broadcom’s expertise in application-specific integrated circuits positions it uniquely to capture this growth, with custom AI accelerators projected to drive a significant portion of its revenue trajectory in the coming years. As these deployments scale, they will not only accelerate OpenAI’s roadmap but also signal a maturing market where hardware innovation becomes as critical as software breakthroughs in sustaining AI’s exponential progress.


