Microsoft's New Azure Servers Deliver Powerful NVIDIA A100 Ampere AI Supercomputing Boost

As artificial intelligence workloads expand exponentially, the hardware powering them must expand as well. To that end, Microsoft Azure is announcing its powerful ND A100 v4 virtual machine series.

Each VM will be kitted out with eight NVIDIA Ampere A100 Tensor Core GPUs. The ND A100 v4 won’t work alone, however, as the Azure blog post states that, “Just like the human brain is composed of interconnected neurons, our ND A100 v4-based clusters can scale up to thousands of GPUs with an unprecedented 1.6 Tb/s of interconnect bandwidth per VM.” Scaled out this way, the series is aimed at even the largest AI training workloads. Azure claims that its inter-GPU communication bandwidth is 16x that offered by any competing cloud provider.
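To give a sense of how that interconnect actually gets used, here is a minimal multi-node data-parallel training sketch in PyTorch with the NCCL backend, which is the usual way a cluster of 8-GPU VMs is driven. This is a generic illustration, not something from the Azure post: the tiny model is a stand-in, and it assumes the environment variables (RANK, WORLD_SIZE, LOCAL_RANK) that a launcher such as torchrun provides.

```python
# Minimal multi-node data-parallel training sketch (PyTorch + NCCL).
# Generic illustration only; assumes launcher-provided env vars
# (RANK, WORLD_SIZE, LOCAL_RANK), e.g. from `torchrun`.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # NCCL carries inter-GPU traffic over the fastest links available:
    # NVLink within a VM and the node-to-node fabric between VMs.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):
        x = torch.randn(64, 1024, device=local_rank)
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()  # gradient all-reduce across all GPUs happens here
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The more inter-GPU bandwidth the cluster provides, the less that all-reduce step bottlenecks training as the GPU count grows into the thousands.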
When talking about a VM, however, not everything comes down to the GPUs and the bandwidth between them. In the ND A100 v4, Azure has paired the GPUs with AMD EPYC "Rome" processors, which support higher-bandwidth interfaces such as PCIe 4.0. This allows data in the system to move “2x faster than before.” When all is said and done, Azure reports that “Most customers will see an immediate boost of 2x to 3x compute performance over the previous generation of systems based on NVIDIA V100 GPUs with no engineering work.” Right now, these instances are in preview, with regular availability planned soon. To read up more on Microsoft's A100 Ampere lovin', be sure to check out the Azure blog post here.
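For a back-of-the-envelope feel for the numbers quoted above, the short sketch below divides the stated 1.6 Tb/s per-VM interconnect across the eight GPUs (assuming an even split, which the post does not spell out) and shows where the "2x faster" host-side figure comes from, since PCIe 4.0 doubles PCIe 3.0's per-lane signaling rate.

```python
# Back-of-the-envelope check on the figures quoted in the article.
# Assumes the 1.6 Tb/s per-VM interconnect is split evenly across the
# eight GPUs; the PCIe values are the standard per-lane signaling rates.

GPUS_PER_VM = 8
VM_INTERCONNECT_TBPS = 1.6  # terabits per second, per the Azure post

per_gpu_gbps = VM_INTERCONNECT_TBPS * 1000 / GPUS_PER_VM
print(f"Implied interconnect per GPU: {per_gpu_gbps:.0f} Gb/s")  # 200 Gb/s

# PCIe 4.0 signals at 16 GT/s per lane versus 8 GT/s for PCIe 3.0,
# which is where the "2x faster" data movement claim comes from.
PCIE3_GTPS, PCIE4_GTPS = 8, 16
print(f"PCIe 4.0 vs PCIe 3.0 speedup: {PCIE4_GTPS / PCIE3_GTPS:.0f}x")
```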