Nvidia plans to launch its own Arm-based CPU, Grace.
A data center server product designed specifically for large-scale neural network workloads, Grace is expected to be available in Nvidia products from 2023.
Named after Grace Hopper, the chip comes as Nvidia is in the midst of trying to acquire Arm for $40bn.
“Leading-edge AI and data science are pushing today’s computer architecture beyond its limits – processing unthinkable amounts of data,” said Jensen Huang, Nvidia CEO and founder.
“Using licensed Arm IP, Nvidia has designed Grace as a CPU specifically for giant-scale AI and HPC. Coupled with the GPU and DPU, Grace gives us the third foundational technology for computing, and the ability to re-architect the data center to advance AI. Nvidia is now a three-chip company.”
The company said that it expects a Grace CPU and Nvidia GPU combination to be 10x faster than a current DGX system, which pairs Nvidia GPUs with AMD CPUs (though, of course, AMD's CPUs will also have improved by 2023).
It appears that Nvidia will only sell Grace in combination with its GPUs, tightly coupling the two chip types. The CPUs and GPUs will be connected by NVLink 4, with more than 900GB/sec of cumulative bandwidth.
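For a rough sense of what that bandwidth means in practice, here is a back-of-the-envelope sketch. The 900GB/sec figure is from Nvidia's announcement; the trillion-parameter model size and FP16 precision are illustrative assumptions, not disclosed Grace workloads:

```python
# Back-of-the-envelope: time to stream a large model's weights
# across the CPU-GPU link at the quoted cumulative bandwidth.
params = 1e12            # hypothetical 1-trillion-parameter model (assumption)
bytes_per_param = 2      # FP16 weights (assumption)
link_bandwidth = 900e9   # bytes/sec, the >900GB/sec NVLink figure quoted

seconds = params * bytes_per_param / link_bandwidth
print(f"{seconds:.2f} s")  # roughly 2.22 s to stream the full model once
```

At that rate, even a model far larger than today's could be moved between CPU memory and the GPU in seconds, which is the kind of "giant-scale AI" coupling Huang describes.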
Details about Grace’s specific design and performance have yet to be disclosed, with the company saying it will use a future generation of Arm’s Neoverse CPU cores.
The Swiss National Supercomputing Centre (CSCS) and the US Department of Energy’s Los Alamos National Laboratory both plan to build supercomputers with Grace CPUs in 2023.
Both systems will be built by HPE Cray. CSCS's Alps supercomputer is expected to deliver 20 exaflops of AI performance (a lower-precision measure, not the Linpack benchmark), which would make it the world's fastest AI-focused supercomputer.
Fewer details have been disclosed about the Los Alamos system, other than that it will be 'leadership class.'
“With an innovative balance of memory bandwidth and capacity, this next-generation system will shape our institution’s computing strategy,” said Thom Mason, Los Alamos National Laboratory Director. “Thanks to Nvidia’s new Grace CPU, we’ll be able to deliver advanced scientific research using high-fidelity 3D simulations and analytics with data sets that are larger than previously possible.”