
Nvidia’s Huang: DeepSeek Isn’t a Threat but a Catalyst

  • Nvidia (NVDA) CEO Jensen Huang, at a Thursday virtual event, dismissed the sell-off that erased roughly $600 billion of Nvidia’s market cap after DeepSeek’s R1 release as a misreading, arguing that post-training methods requiring substantial computing power keep Nvidia’s chips vital.
  • DeepSeek’s cost-efficient, open-source AI model sparked the sell-off, but Huang hailed the global energy it has generated while insisting that the growing complexity of post-training ensures demand for Nvidia’s technology.
  • With the stock recovering from the January rout and a February 26 earnings call approaching, Huang countered doubts about model scaling and reaffirmed Nvidia’s central role in an AI landscape shifting toward reasoning and adoption – an upbeat read on DeepSeek echoed by AMD CEO Lisa Su.


Nvidia (NVDA) CEO Jensen Huang, speaking at a virtual event aired Thursday to debut DDN’s Infinia platform, pushed back against the market’s dramatic reaction to DeepSeek’s R1 model, a Chinese AI breakthrough that shaved nearly $600 billion off Nvidia’s market cap in January and slashed nearly 20% from his personal net worth. Investors, spooked by the notion that DeepSeek’s open-source reasoning model – built on weaker chips with a fraction of the funding of Western AI giants – might signal a reduced need for Nvidia’s high-powered GPUs, dumped shares en masse, fearing that the hundreds of billions of dollars Big Tech has poured into AI infrastructure could be overkill. Huang called this sell-off a misstep, arguing that the industry’s focus isn’t just on pre-training models but hinges critically on post-training methods—processes that demand substantial computing power to refine AI’s ability to reason and predict, areas where Nvidia’s chips remain indispensable.

DeepSeek, birthed by High-Flyer, a Chinese hedge fund, unleashed R1 as a competitive jolt to the AI landscape, prompting questions about whether cost-efficient models could upend the scaling paradigm that’s fueled Nvidia’s meteoric rise. Huang countered that view, insisting that post-training is “the most important part of intelligence,” where models learn to tackle complex problems—a phase he described as “really quite intense” and ripe for growth, ensuring continued demand for Nvidia’s technology as these methods diversify. He framed DeepSeek’s innovation as a spark igniting global AI excitement, noting, “The energy around the world as a result of R1 becoming open-sourced—incredible,” a sentiment echoed by Nvidia’s rival Advanced Micro Devices (AMD) CEO Lisa Su, who earlier this month praised DeepSeek for driving adoption-friendly innovation.

The market’s knee-jerk reaction – since largely reversed as Nvidia’s stock clawed back value – underscored a broader anxiety Huang has been battling for months: that model scaling might be faltering, a concern predating DeepSeek’s January splash when whispers of OpenAI’s slowing progress stoked doubts about AI’s payoff. Huang, unwavering, has maintained since November that scaling thrives, simply shifting from training to inference and now post-training, where computational heft remains king. With Nvidia’s first earnings call of 2025 looming on February 26, his Thursday remarks signal a defiant stance: DeepSeek isn’t a threat but a catalyst, amplifying the AI boom’s momentum and reinforcing Nvidia’s pivotal role as all “Mag 7” members grapple with its implications for their own earnings calls.

