Customers around the globe depend on Microsoft Azure to drive breakthroughs related to the environment, public health, energy sustainability, climate modeling, economic development, and more. Finding solutions to these critical challenges requires enormous amounts of focused computing power. Customers are increasingly finding the best way to access such high-performance computing (HPC) through the agility, scale, security, and leading-edge performance of Azure's purpose-built HPC and AI cloud services.
Azure's market-leading vision for HPC and AI is built on a core of genuine and recognized HPC expertise, using proven HPC technology and design principles, enhanced with the best features of the cloud. The result is a capability that delivers performance, scale, and value unlike any other cloud. It means applications scaling 12 times higher than on other public clouds. It means higher application performance per node. It means powering AI workloads for one customer with a supercomputer fit to rank among the top five in the world. It also means delivering massive compute power into the hands of medical researchers over a weekend to prove out life-saving innovations in the fight against COVID-19.
This year at NVIDIA GTC 21, we're spotlighting some of the most transformational applications powered by NVIDIA accelerated computing, highlighting our commitment to edge, on-premises, and cloud computing. Registration is free, so sign up to learn how Microsoft is powering transformation.
AI and supercomputing scale
The AI and machine learning space continues to be one of the most inspiring areas of technical evolution since the internet. The trend toward using massive AI models to power a large number of tasks is changing how AI is built. Training models at this scale requires large clusters of hundreds of machines with specialized AI accelerators interconnected by high-bandwidth networks within and across the machines. We have been building such clusters in Azure to enable new natural language generation and understanding capabilities across Microsoft products.
The work we have done on large-scale compute clusters, leading network design, and the software stack that manages it all, including Azure Machine Learning, ONNX Runtime, and other Azure AI services, is directly aligned with our AI at Scale strategy.
Machine learning at the edge
Microsoft provides several offerings in the intelligent edge portfolio to ensure that customers can run machine learning not only in the cloud but also at the edge. These offerings include Azure Stack Hub, Azure Stack Edge, and IoT Edge.
Whether you are capturing sensor data and inferencing at the edge, or performing end-to-end processing with model training in Azure and deploying the trained models to the edge for enhanced inferencing operations, Microsoft can support your needs however and wherever you need it.
Visualization and GPU workstations
Azure enables a wide range of visualization workloads, which are critical for desktop virtualization as well as professional graphics such as computer-aided design, content creation, and interactive rendering. Visualization workloads on Azure are powered by NVIDIA's world-class graphics processing units (GPUs) and RTX technology, the world's preeminent visual computing platform.
With access to graphics workstations on the Azure cloud, artists, designers, and technical professionals can work remotely, from anywhere, and from any connected device. See our NV-series virtual machines (VMs) for Windows and Linux.
Recapping 2021 moments with Azure and NVIDIA technologies
Wildlife Protection Solutions
From deforestation to wildfire management to protecting endangered animals, studying wildlife populations is essential to a sustainable future. Learn how Wildlife Protection Solutions works with Microsoft AI for Earth to deliver the monitoring technology that conservation groups need to keep watch over wild places and protect wildlife, using an infrastructure of Azure High Performance Computing virtual machines with NVIDIA V100 GPUs.
Van Gogh Museum
With tens of thousands of Chinese visitors each year, the Van Gogh Museum wanted to create something unique for this audience. Enter WeChat, an app that could transform portrait photos into digital paintings reminiscent of Van Gogh's art. Users, able to see how the artist would have painted them, would ideally be drawn closer to his art through this unique, personal experience. Read about how the Van Gogh Museum accomplished this using Azure High Performance Computing, Azure Machine Learning, and more.
FLSmidth
FLSmidth has an ambitious goal of zero emissions by 2030, but it was hampered by the latency and performance limitations of its on-premises infrastructure. By moving to Microsoft Azure in collaboration with partner UberCloud, FLSmidth found the right vehicle for optimizing the engineering simulation platforms that depend on high-performance computing. The change has removed all latency, democratized its platform, and produced results 10 times faster than its previous infrastructure.
Earlier 2021 Azure HPC and AI product launches
Azure announces general availability of scale-out NVIDIA A100 GPU clusters: the fastest public cloud supercomputer, the Azure ND A100 v4 Virtual Machine, powered by NVIDIA A100 Tensor Core GPUs and designed to let our most demanding customers scale up and scale out without slowing down.
In the June 2021 TOP500 list, Microsoft Azure took public cloud services to a new level, demonstrating work on systems that took four consecutive spots, from No. 26 to No. 29, on the TOP500 list. They are parts of a global AI supercomputer known as the ND A100 v4 cluster, available on demand in four global regions today. These rankings were achieved on a fraction of our overall cluster size. Each of the systems delivered 16.59 petaflops on the HPL benchmark, also known as Linpack, a common measure of HPC performance on 64-bit floating-point math that is the basis for the TOP500 rankings.
Azure announces the DeepSpeed- and Megatron-powered Megatron-Turing Natural Language Generation model (MT-NLG), the largest and most powerful monolithic transformer language model trained to date, with 530 billion parameters. It is the result of a research collaboration between Microsoft and NVIDIA to further parallelize and optimize the training of very large AI models.
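A back-of-the-envelope calculation shows why training at this parameter count requires the parallelism work described above. The numbers below are illustrative assumptions (fp16 weights only, ignoring optimizer state and activations, 80 GB of memory per A100 GPU), not figures from the MT-NLG project:

```python
# Rough memory estimate for a 530-billion-parameter model.
# Assumptions (illustrative): fp16 weights only, no optimizer state
# or activations counted, 80 GB of memory per GPU.
params = 530e9
bytes_per_param = 2  # fp16
weights_gb = params * bytes_per_param / 1e9  # GB of raw weights
gpu_mem_gb = 80  # e.g., one NVIDIA A100 80 GB

min_gpus = -(-weights_gb // gpu_mem_gb)  # ceiling division
print(f"{weights_gb:.0f} GB of fp16 weights -> "
      f"at least {min_gpus:.0f} GPUs just to hold them")
```

Since the weights alone exceed any single GPU's memory by more than an order of magnitude, and training additionally needs optimizer state and activations, the model must be partitioned across many GPUs and nodes, which is exactly what DeepSpeed and Megatron parallelism provide.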
Join us at the NVIDIA GTC Fall 2021 conference
Microsoft Azure is sponsoring NVIDIA GTC 2021 conference workshops and training. The NVIDIA Deep Learning Institute (DLI) offers hands-on training in AI, accelerated computing, and accelerated data science to help developers, data scientists, and other professionals solve their most challenging problems. These in-depth workshops are taught by experts in their respective fields, delivering industry-leading technical knowledge to drive breakthrough results for individuals and organizations.
On-demand Microsoft sessions at GTC
Microsoft session recordings will be available on the GTC website starting April 12, 2021. You can find a list of the Microsoft virtual sessions, along with corresponding links, in the Microsoft Tech Community blog here.