In the AI gold rush, NVIDIA is the one selling the shovels—and AMD is still setting up its stall.
Once hailed for its underdog comeback in CPUs and gaming GPUs, AMD now faces a new uphill battle: catching up in the AI accelerator market. Despite CEO Lisa Su’s bold moves—including the $50 billion acquisition of Xilinx and the launch of the MI300X GPU—AMD is still chasing NVIDIA’s shadow. The question is no longer whether AMD is serious about AI, but whether it’s too late to matter.
NVIDIA’s dominance in AI didn’t happen overnight. Back in 2006, it launched CUDA, a GPU programming platform that became the backbone of AI research. By the time AlexNet won the 2012 ImageNet competition using NVIDIA GPUs, the company had already embedded itself into the DNA of machine learning. Fast forward to today, and its Hopper-based H100 and upcoming Blackwell B200 chips are the default choice for training and inference at hyperscalers like Google and Meta.
AMD, meanwhile, was busy clawing back CPU market share from Intel and powering consoles like the PS4 and Xbox One. Its focus on gaming and general-purpose computing paid off in the short term, but left it flat-footed when AI exploded. By the time AMD pivoted to AI, NVIDIA had already built a decade-long moat of hardware, software, and developer mindshare.
AMD’s MI300X is no slouch. With 192GB of HBM3 memory and a chiplet-based design shared with the MI300A APU that powers the El Capitan supercomputer, it’s a formidable piece of silicon. In some benchmarks, it even outperforms NVIDIA’s H100. But performance isn’t the whole story. NVIDIA’s ecosystem—CUDA, cuDNN, TensorRT—is deeply entrenched, making it hard for AMD to displace existing workflows. As one analyst noted, “NVIDIA’s Hopper H200 retains a significant performance advantage over the MI300X. Expect that gap to expand even further with Blackwell and Rubin.”
Moreover, AMD’s MI300X is just now reaching customers, while NVIDIA is already shipping its next-gen chips. In the fast-moving AI market, being late—even by a year—can be fatal.
Recognizing that hardware alone isn’t enough, AMD acquired Nod.ai in 2023 to bolster its AI software capabilities. Nod.ai’s compiler-based automation software aims to simplify the deployment of AI models on AMD hardware. But building a software ecosystem takes time, and AMD is starting from behind. NVIDIA’s CUDA has been the industry standard for years, and retraining developers to use a new platform is a monumental task.
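AMD’s main answer to the CUDA lock-in is HIP, a runtime API that mirrors CUDA nearly call-for-call, plus the HIPIFY tool, which mechanically rewrites CUDA source into HIP. The toy sketch below illustrates that idea with a small, illustrative subset of the API mapping—it is not AMD’s actual HIPIFY tool, and the real barrier lies less in renaming calls than in libraries like cuDNN and TensorRT that have no drop-in equivalent:

```python
# Toy sketch of HIPIFY-style source translation: CUDA runtime calls map
# one-to-one onto HIP equivalents, so much of a port is mechanical.
# Illustrative subset only -- not AMD's actual HIPIFY tool.
CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cudaFree": "hipFree",
}

def hipify(source: str) -> str:
    """Rename CUDA runtime identifiers to their HIP counterparts."""
    # Replace longest names first so cudaMemcpyHostToDevice is handled
    # before the shorter cudaMemcpy would mangle it.
    for cuda_name in sorted(CUDA_TO_HIP, key=len, reverse=True):
        source = source.replace(cuda_name, CUDA_TO_HIP[cuda_name])
    return source

cuda_snippet = """#include <cuda_runtime.h>
cudaMalloc(&buf, n);
cudaMemcpy(buf, host, n, cudaMemcpyHostToDevice);
cudaDeviceSynchronize();
cudaFree(buf);"""

print(hipify(cuda_snippet))
```

The mechanical ease of this translation is precisely AMD’s argument to developers; the counterargument, as the ecosystem point above suggests, is that a decade of CUDA-tuned kernels, profilers, and libraries does not rename so easily.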
AMD’s $50 billion acquisition of Xilinx in 2022 was a bold bet on adaptive computing. Xilinx’s FPGAs and AI engines offer flexibility that fixed-function GPUs can’t match. In theory, this gives AMD an edge in specialized AI workloads. In practice, integrating Xilinx’s technology into AMD’s product stack has been slow, and the payoff remains uncertain.
AMD’s stock has taken a hit recently, with shares down 3.7% after a lackluster Advancing AI 2024 event. Analysts have downgraded the stock, citing concerns about AMD’s ability to compete with NVIDIA in AI. While some remain bullish, pointing to AMD’s potential in inference workloads, others worry that the company is perpetually playing catch-up.
Lisa Su has orchestrated one of the most impressive turnarounds in tech history, transforming AMD from a struggling CPU maker into a formidable competitor. But the AI race is a different beast. NVIDIA’s early investments in software and ecosystem have created a near-insurmountable lead. AMD’s recent moves are promising, but the company is still years behind. In the world of AI, where speed and scale are everything, catching up may not be enough.