Google, Microsoft, Meta, AMD, and other leading tech giants have announced the formation of the "Ultra Accelerator Link" (UALink) alliance, aimed at challenging the dominance of AI chip leader Nvidia.
The UALink group aims to establish an open standard for the components that connect AI accelerators in data centers, speeding up the training, fine-tuning, and running of AI models.
Forrest Norrod, general manager of AMD's Data Center Solutions group, said the AI industry needs open standards that can move development forward quickly. "Open formats allow multiple companies to add value to the entire ecosystem," he said, adding that a new standard would let innovation proceed without being constrained by any single company.
The UALink 1.0 standard will connect up to 1,024 AI accelerators (GPUs only) within a single compute pod, and an updated UALink 1.1 standard is set to be released in Q4 2024. Norrod said the first UALink products are expected to launch in the "coming years."
Other members of the alliance include Broadcom, Cisco Systems, Hewlett Packard Enterprise, and Intel. Nvidia, which holds roughly 80% of the AI chip market, is notably absent from the group, as are AWS and Broadcom's main competitor, Marvell.
Media analysts suggest that Nvidia, which sells its own interconnect technology (NVLink), is unlikely to back a competing standard. Amazon may still be weighing its options as it continues to work on its various in-house accelerator hardware efforts. Moreover, AWS, which dominates the cloud services market and primarily offers its customers Nvidia GPUs, may find little strategic value in UALink.
Tech media outlets believe that, aside from AMD and Intel, the biggest beneficiaries of UALink are likely to be Microsoft, Meta, and Google. These companies spend billions of dollars on Nvidia GPUs to power their cloud services and train their AI models, and each is now developing its own custom chips and AI accelerators.