March 26, 2025


Today, Azure announces the general availability of the Azure ND A100 v4 Cloud GPU instances, powered by NVIDIA A100 Tensor Core GPUs, achieving leadership-class supercomputing scalability in a public cloud. For demanding customers chasing the next frontier of AI and high-performance computing (HPC), scalability is the key to unlocking improved Total Cost of Solution and Time-to-Solution.

Simply put, ND A100 v4 powered by NVIDIA A100 GPUs is designed to let our most demanding customers scale up and scale out without slowing down.

Benchmarking with 164 ND A100 v4 virtual machines on a pre-release public supercomputing cluster yielded a High-Performance Linpack (HPL) result of 16.59 petaflops. This HPL result, delivered on public cloud infrastructure, would fall within the Top 20 of the November 2020 Top 500 list of the fastest supercomputers in the world, or top 10 in Europe, based on the region where the job was run.

Measured via HPL-AI, an artificial intelligence (AI) and machine learning (ML)-focused High-Performance Linpack variant, the same 164-VM pool achieved a 142.8-petaflop result, placing it among the world's Top 5 fastest known AI supercomputers as measured by the official HPL-AI benchmark list. These HPL results, utilizing only a fraction of a single public Azure cluster, rank with the most powerful dedicated, on-premises supercomputing resources in the world.
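The aggregate figures above imply a rough per-VM throughput; a quick sketch (a simple average, assuming the reported results scale evenly across the 164 VMs) shows the difference between the FP64 HPL and mixed-precision HPL-AI numbers:

```python
# Back-of-the-envelope per-VM throughput from the published cluster results.
# Simple division only; assumes the aggregate is evenly attributable per VM.
vms = 164
hpl_pflops = 16.59      # FP64 High-Performance Linpack result, petaflops
hpl_ai_pflops = 142.8   # mixed-precision HPL-AI result, petaflops

hpl_per_vm_tflops = hpl_pflops / vms * 1000
hpl_ai_per_vm_tflops = hpl_ai_pflops / vms * 1000

print(f"HPL:    ~{hpl_per_vm_tflops:.0f} TFLOPS per VM")     # ~101
print(f"HPL-AI: ~{hpl_ai_per_vm_tflops:.0f} TFLOPS per VM")  # ~871
```

The roughly 8.6x gap between the two per-VM figures reflects HPL-AI's use of the A100's lower-precision tensor cores rather than pure FP64 arithmetic.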

And today, as ND A100 v4 reaches general availability, we are announcing the immediate availability of the world's fastest public cloud supercomputers on-demand, near you, via four Azure regions: East US, West US 2, West Europe, and South Central US.

The ND A100 v4 VM series starts with a single virtual machine (VM) and eight NVIDIA Ampere architecture-based A100 Tensor Core GPUs, and can scale up to thousands of GPUs in a single cluster with an unprecedented 1.6 Tb/s of interconnect bandwidth per VM, delivered via NVIDIA HDR 200 Gb/s InfiniBand links: one for each individual GPU. Additionally, every 8-GPU VM features a full complement of third-generation NVIDIA NVLink, enabling GPU-to-GPU connectivity within the VM in excess of 600 gigabytes per second.
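The per-VM bandwidth figure follows directly from the topology described above, one HDR link per GPU; a minimal sketch of the arithmetic:

```python
# Sanity-check the per-VM interconnect figure: one HDR InfiniBand link per GPU.
gpus_per_vm = 8
link_gbps = 200  # NVIDIA HDR InfiniBand, gigabits per second per link

vm_tbps = gpus_per_vm * link_gbps / 1000
print(f"{vm_tbps} Tb/s of InfiniBand bandwidth per VM")  # 1.6 Tb/s

# Note: third-generation NVLink inside the VM is quoted in gigaBYTES per
# second (>600 GB/s), a different unit from the Gb/s InfiniBand figures.
```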

Built to take advantage of de facto industry-standard HPC and AI tools and libraries, customers can leverage ND A100 v4's GPUs and unique interconnect capabilities without any special software or frameworks, using the same NVIDIA NCCL 2 libraries that most scalable GPU-accelerated AI and HPC workloads support out of the box, without any concern for underlying network topology or placement. Provisioning VMs within the same VM Scale Set automatically configures the interconnect fabric.
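In practice, a standard NCCL-backed PyTorch job needs no Azure-specific code to use this fabric. A hypothetical launch across two ND A100 v4 VMs might look like the following; the script name, node count, and rendezvous address are placeholders, not values from this announcement:

```shell
# Hypothetical torchrun launch across two ND A100 v4 VMs (8 GPUs each).
# train.py and the rendezvous endpoint below are illustrative placeholders.
export NCCL_DEBUG=INFO   # log which transports/topology NCCL selects

torchrun \
  --nnodes=2 --nproc_per_node=8 \
  --rdzv_backend=c10d --rdzv_endpoint=10.0.0.4:29500 \
  train.py
```

With `NCCL_DEBUG=INFO`, the startup logs show NCCL's transport selection, which is a quick way to confirm the InfiniBand links are being used rather than TCP.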

Anyone can bring demanding on-premises AI and HPC workloads to the cloud via ND A100 v4 with minimal fuss, but for customers who prefer an Azure-native approach, Azure Machine Learning provides a tuned virtual machine (pre-installed with the required drivers and libraries) and container-based environments optimized for the ND A100 v4 family. Sample recipes and Jupyter Notebooks help users get started quickly with several frameworks, including PyTorch and TensorFlow, and with training state-of-the-art models like BERT. With Azure Machine Learning, customers have access to the same tools and capabilities in Azure as our AI engineering teams.

Each NVIDIA A100 GPU offers 1.7 to 3.2 times the performance of prior V100 GPUs out of the box, and up to 20 times the performance when layering new architectural features like mixed-precision modes, sparsity, and Multi-Instance GPU (MIG) for specific workloads. And at the heart of every VM is an all-new 2nd-generation AMD EPYC platform, featuring PCI Express Gen 4.0 for CPU-to-GPU transfers twice as fast as prior generations.

We can't wait to see what you'll build, analyze, and discover with the new Azure ND A100 v4 platform.



Size: Standard_ND96asr_v4
Physical CPU cores: 96
Host memory: 900 GB
GPUs: 8 x 40 GB NVIDIA A100
Local NVMe temporary disk: 6,500 GB
NVIDIA InfiniBand network: 8 x 200 Gbps
Azure network: 40 Gbps

Learn more
