Nvidia's Jensen Huang downplays the competition, Micron disappoints.

Nvidia will remain the gold standard for AI training chips, CEO Jensen Huang told investors, even as rivals push to erode its market share and one of its biggest AI chip suppliers issued guidance that underwhelmed the market.

Everyone from OpenAI to Elon Musk's Tesla relies on Nvidia semiconductors to power their large language or computer vision models. The rollout of Nvidia's “Blackwell” system later this year will only cement that lead, Huang said at the company's annual shareholder meeting on Wednesday.

Unveiled in March, Blackwell is the next generation of AI training processors, following the company's flagship “Hopper” line of H100 chips, one of the most valuable assets in the tech industry, priced in the tens of thousands of dollars.

“The Blackwell architecture platform is probably the most successful product in our history and even the history of computers,” Huang said.

Nvidia briefly eclipsed Microsoft and Apple this month to become the world's most valuable company in a remarkable rally that has fueled much of this year's gains in the S&P 500 index. At more than $3 trillion, Huang's company was at one point worth more than entire economies and stock markets, only to suffer a record loss in market value as investors locked in profits.

Yet as long as Nvidia chips remain the standard for AI training, there is little reason to believe the long-term outlook is cloudy, and the fundamentals look strong.

One of Nvidia's key strengths is its sticky AI ecosystem, known as CUDA, short for Compute Unified Device Architecture. Just as everyday users are nervous about switching from an Apple iOS device to a Samsung phone running Google's Android, a whole generation of developers has worked with CUDA for years and is comfortable enough that there is no reason to consider another software platform. Like the hardware, CUDA has effectively become a standard in its own right.

“The Nvidia platform is widely available through every major cloud provider and computer manufacturer, which creates a large and attractive base for developers and customers and makes our platform more valuable,” Huang added on Wednesday.

Micron's in-line guidance for next quarter's earnings isn't enough for bulls.

The AI trade took a recent hit after memory chip supplier Micron Technology, which supplies high-bandwidth memory (HBM) chips to companies like Nvidia, forecast fiscal fourth-quarter revenue of about $7.6 billion, merely in line with market expectations.

Shares in Micron fell 7%, trailing the broader tech-heavy Nasdaq Composite by a wide margin.

In the past, Micron and its Korean rivals Samsung and SK Hynix have seen cyclical booms and busts in the memory chip market, which has long been considered a commodity business compared to logic chips like graphics processors.

But demand for the chips needed for AI training has fueled excitement. Micron's stock has more than doubled over the past 12 months, meaning investors have already priced in much of management's forecast.

“The guidance was basically in line with expectations, and in the AI hardware world, if you guide in line, that's considered a minor disappointment,” says Gene Munster, a tech investor with Deepwater Asset Management. “Momentum investors didn't see that additional reason to be more positive about the story.”

Analysts closely track demand for high-bandwidth memory as a key indicator for the AI industry because it is critical to solving the biggest economic hurdle facing AI training today: scaling.

HBM chips solve the scaling problem in AI training.

Costs increase not linearly with the complexity of a model, measured by its number of parameters, which can run into the billions, but exponentially. This results in diminishing returns in performance over time.

Even if revenue grows at a steady rate, losses risk running into the billions or tens of billions of dollars annually as a model is developed further. That threatens to overwhelm any company that doesn't have a deep-pocketed investor like Microsoft capable of making sure OpenAI can still “pay the bills,” as CEO Sam Altman put it recently.

One of the main reasons for diminishing returns is the widening gap between the two factors that determine AI training performance. The first is the raw compute power of the logic chip, measured in FLOPS, or floating-point operations per second. The second is the memory bandwidth needed to feed it data quickly, often expressed in millions of transfers per second, or MT/s.
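
To make those units concrete, here is a minimal sketch, in Python, of how a transfer rate in MT/s combines with a bus width to give bandwidth. The figures are hypothetical, chosen only to illustrate the arithmetic, and are not taken from any vendor's datasheet.

```python
# Bandwidth is transfers per second times bytes per transfer (the bus width).

def bandwidth_gb_s(rate_mt_s: float, bus_width_bits: int) -> float:
    """Bandwidth in GB/s for an interface running at `rate_mt_s`
    megatransfers per second over a `bus_width_bits`-wide bus."""
    bytes_per_transfer = bus_width_bits / 8
    return rate_mt_s * 1e6 * bytes_per_transfer / 1e9

# A hypothetical HBM-class stack: 6400 MT/s over a 1024-bit interface.
print(bandwidth_gb_s(6400, 1024))  # -> 819.2 GB/s
```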

Because the two work together, scaling one without the other just leads to waste and inefficiency. This is why FLOPS utilization, or how much of the available computation is actually performed, is a key metric when evaluating the cost-effectiveness of AI models.
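
To see why utilization collapses when bandwidth lags compute, here is a back-of-envelope, roofline-style sketch in Python. The peak-compute and bandwidth constants are illustrative assumptions, not measured or vendor-published figures.

```python
# Attainable throughput is capped either by the chip's peak compute or by
# how fast memory can feed it, whichever bound is hit first.

PEAK_FLOPS = 1.0e15   # assumed peak compute, floating-point ops per second
PEAK_BW = 3.0e12      # assumed memory bandwidth, bytes per second

def attainable_flops(ops_per_byte: float) -> float:
    """FLOPS achievable by a kernel that performs `ops_per_byte`
    floating-point operations for every byte moved to or from memory."""
    return min(PEAK_FLOPS, PEAK_BW * ops_per_byte)

for label, intensity in [("memory-bound kernel", 10.0),
                         ("near the crossover", 333.0),
                         ("compute-bound matmul", 1000.0)]:
    util = attainable_flops(intensity) / PEAK_FLOPS
    print(f"{label:22s} ops/byte={intensity:7.1f}  utilization={util:6.1%}")
```

In this sketch the crossover sits at PEAK_FLOPS / PEAK_BW, about 333 operations per byte: any workload below it leaves the chip idling while it waits on memory, which is exactly the inefficiency that faster HBM is meant to relieve.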

Sold out through the end of next year.

As Micron points out, data transfer rates have failed to keep pace with increasing computing power. The resulting bottleneck, often referred to as the “memory wall”, is a major cause of today's inherent inefficiencies when scaling AI training models.

This explains why the U.S. government focused so much on memory bandwidth when deciding which specific Nvidia chips to ban from being exported to China to undermine Beijing's AI development program.

On Wednesday, Micron said its HBM business was “sold out” through the end of the next calendar year (its fiscal year runs roughly a quarter ahead of the calendar), echoing similar comments from Korean rival SK Hynix.

“We expect several hundred million dollars of revenue from HBM in fiscal 2024, and [billions of dollars] in HBM revenue in fiscal 2025,” Micron said on Wednesday.
