Tech companies form an industry group to help develop next-generation AI chip components.


Intel, Google, Microsoft, Meta and other tech heavyweights are forming a new industry group, the Ultra Accelerator Link (UALink) Promoter Group, to guide the development of the components that link AI accelerator chips in data centers.

Announced Thursday, the UALink Promoter Group — which also counts AMD (but not Arm), Hewlett-Packard Enterprise, Broadcom and Cisco among its members — is proposing a new industry standard to connect the AI accelerator chips found in a growing number of servers. Broadly defined, AI accelerators are chips, ranging from GPUs to custom-designed solutions, that speed up the training, fine-tuning and running of AI models.

“The industry needs an open standard that can be moved forward very quickly, in an open [format] that allows multiple companies to add value to the overall ecosystem,” Forrest Norrod, AMD's GM of data center solutions, told reporters at a briefing Wednesday. “The industry needs a standard that allows innovation to proceed at a rapid clip without being hindered by any single company.”

One version of the proposed standard, UALink 1.0, would connect up to 1,024 AI accelerators — GPUs only — into a single computing “pod.” (The group defines a pod as one or several racks in a server.) UALink 1.0, based on “open standards” including AMD's Infinity Fabric, will allow direct loads and stores between the memory attached to AI accelerators, and will generally boost speed while lowering data transfer latency compared with current interconnect specifications, according to the UALink Promoter Group.

Image credit: UALink Promoter Group

The group says it will form a consortium, the UALink Consortium, in Q3 to oversee development of the UALink spec. UALink 1.0 will be made available around the same time to companies that join the consortium, with a higher-bandwidth updated spec, UALink 1.1, set to arrive in Q4 2024.

The first UALink products will launch “within the next couple of years,” Norrod said.

Conspicuously absent from the group's member list is Nvidia, the largest AI accelerator maker with an estimated 80% to 95% market share. Nvidia declined to comment for this story. But it's not hard to see why the chipmaker isn't enthusiastically throwing its weight behind UALink.

For one, Nvidia offers its own proprietary interconnect tech for linking GPUs within a data center server. The company likely isn't keen to support a spec based on rival technologies.

Then there's the fact that Nvidia is operating from a position of enormous power and influence.

In Nvidia's most recent fiscal quarter (Q1 2025), the company's data center sales, which include sales of its AI chips, grew more than 400% from the year-ago quarter. If Nvidia continues at its current pace, it is on track to overtake Apple as the world's second most valuable firm sometime this year.

So, simply put, Nvidia doesn't have to play ball if it doesn't want to.

As for Amazon Web Services (AWS), the only public cloud giant not contributing to UALink, it may be in a “wait-and-see” mode as it works through its various in-house accelerator hardware efforts. It may also be that AWS, with its dominant share of the cloud services market, doesn't see much strategic point in opposing Nvidia, which supplies many of the GPUs it serves to customers.

AWS did not respond to TechCrunch's request for comment.

In fact, UALink's biggest beneficiaries — aside from AMD and Intel — may be Microsoft, Meta and Google, which together have spent billions of dollars on Nvidia GPUs to power their clouds and train their ever-growing AI models. All are eager to wean themselves off a vendor they see as worrisomely dominant in the AI hardware ecosystem.

Google has custom chips for training and running AI models, TPUs, as well as Axion. Amazon has several AI chip families under its belt. Microsoft jumped into the fray last year with Maia and Cobalt. And Meta is refining its own lineup of accelerators.

Meanwhile, Microsoft and its close partner, OpenAI, reportedly plan to spend at least $100 billion on a supercomputer for training AI models that will be outfitted with future versions of Cobalt and Maia chips. Those chips will need something to link them — and perhaps it will be UALink.

