AMD’s Instinct MI100 GPU targets HPC, surpassing 10 teraflops


AMD has announced the first x86 server GPU to surpass the 10 teraflops (FP64) performance barrier, the AMD Instinct MI100 accelerator.

The 7nm chip, targeted at high performance computing customers, will launch by the end of the year.

The MI100 is capable of 11.5 teraflops of peak FP64 throughput, roughly 1.7 times the 6.6 teraflops of the previous-gen MI50. It also boasts a peak throughput of 23.1 teraflops in FP32 workloads, putting it ahead of competitor Nvidia's A100 – although that GPU comes out in front in other benchmarks. The MI100 has a 300W TDP, below the A100's 400W.
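Those peak figures can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes details not stated in the article: 7,680 stream processors, a boost clock of roughly 1.5 GHz, and vector FP64 running at half the FP32 rate.

```python
# Back-of-the-envelope check of the quoted peak-throughput figures.
# Assumptions (not stated in the article): 7,680 stream processors,
# ~1.502 GHz boost clock, FP64 at half the FP32 rate.
stream_processors = 7680
boost_clock_hz = 1.502e9

# Each lane can retire one fused multiply-add (2 FLOPs) per cycle.
fp32_tflops = stream_processors * 2 * boost_clock_hz / 1e12
fp64_tflops = fp32_tflops / 2  # half-rate vector FP64

print(f"FP32 peak: {fp32_tflops:.1f} TFLOPS")  # ~23.1
print(f"FP64 peak: {fp64_tflops:.1f} TFLOPS")  # ~11.5
```

Under those assumptions the arithmetic lands on the article's 23.1 and 11.5 teraflop figures.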

AMD has now split its graphics architectures in two, with its RDNA platform focusing on gaming, while its new CDNA platform will be specialized for HPC and AI workloads.

The accelerator features 32GB of HBM2 high-bandwidth memory clocked at 1.2 GHz, delivering 1.23 TB/s of memory bandwidth to support large data sets. Each card is capable of up to 340 GB/s of aggregate throughput over three Infinity Fabric links, and the cards are designed to be deployed in quad-GPU hives (up to two per server), with each hive supporting up to 552 GB/s of peer-to-peer (P2P) I/O bandwidth.
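The 1.23 TB/s figure follows from the memory clock if one assumes a 4,096-bit aggregate HBM2 bus (an assumption not stated in the article) and double-data-rate transfers:

```python
# Sanity-check of the quoted 1.23 TB/s memory bandwidth.
# Assumptions (not in the article): a 4,096-bit aggregate HBM2 bus;
# HBM2 transfers data on both clock edges (double data rate).
memory_clock_hz = 1.2e9
bus_width_bits = 4096

bytes_per_transfer = bus_width_bits / 8  # 512 bytes per edge
bandwidth_tbs = memory_clock_hz * bytes_per_transfer * 2 / 1e12

print(f"Peak memory bandwidth: {bandwidth_tbs:.2f} TB/s")  # ~1.23
```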

Dell, HPE, Gigabyte, Lenovo, and Supermicro were among the OEM and ODM partners announcing servers supporting the GPU.

“We’re excited that AMD is making a big impact in high-performance computing with AMD Instinct MI100 GPU accelerators,” said Vik Malyala, SVP of field application engineering and business development at Supermicro.

“With the combination of the compute power gained with the new CDNA architecture, along with the high memory and GPU peer-to-peer bandwidth the MI100 brings, our customers will get access to great solutions that will meet their accelerated compute requirements and critical enterprise workloads.”

Source: datacenterdynamics.com