We’ve hardly heard from Nvidia in the CPU field for years, following the lackluster arrival of the Project Denver CPU and its accompanying Tegra K1 mobile processors in 2014. But now the company is returning to CPUs in a big way with the new Nvidia Grace, an Arm-based processor designed specifically for AI data centers.
It’s a good time for Nvidia to flex its Arm: the company is currently trying to buy Arm itself for $40 billion, specifically pitching the deal as an attempt “to create the world’s premier computing company for the age of AI,” and this chip is perhaps the first proof point. Arm is also having a moment in consumer computing, where Apple’s M1 chips recently turned our concept of laptop performance upside down. Grace is, of course, also more competition for Intel, whose shares fell after Nvidia’s announcement.
The new Grace is named after computing pioneer Grace Hopper, and it arrives in 2023 promising “10x the performance of today’s fastest servers on the most complex AI and high-performance computing workloads,” says Nvidia. That, of course, makes it attractive to research organizations building supercomputers: the Swiss National Supercomputing Centre (CSCS) and Los Alamos National Laboratory have already signed up to build Grace-powered systems, also due in 2023.
A Grace Next is already on the roadmap for 2025. Here’s a slide from Nvidia’s GTC 2021 presentation, where the news was announced:
I recommend reading what our friends at AnandTech have to say about where Grace might fit in the data center market and in Nvidia’s ambitions. Nvidia isn’t releasing many specs yet, but it does say Grace uses fourth-gen NVLink with a record-breaking 900 GB/s connection between the CPU and GPU. Crucially, that is greater than the CPU’s own memory bandwidth, which means that Nvidia’s GPUs will have “a cache-coherent link to the CPU that can access the system’s full-bandwidth memory, leaving the entire system with a single shared memory address space,” writes AnandTech.