# NVIDIA DGX GB200

The AI factory purpose-built for state-of-the-art AI models.

## Enterprise Infrastructure for Mission-Critical AI

NVIDIA DGX™ GB200 is purpose-built for training and inference on trillion-parameter [generative AI models](https://www.nvidia.com/en-us/ai-data-science/generative-ai.md). Designed as a rack-scale solution, each liquid-cooled rack features 36 [NVIDIA GB200 Grace Blackwell Superchips](https://www.nvidia.com/en-us/data-center/gb200-nvl72.md) (36 NVIDIA Grace CPUs and 72 Blackwell GPUs) connected as one with [NVIDIA NVLink™](https://www.nvidia.com/en-us/data-center/nvlink.md). Multiple racks can be connected with [NVIDIA Quantum InfiniBand](https://www.nvidia.com/en-us/networking/products/infiniband.md) to scale up to hundreds of thousands of GB200 Superchips.

### Advanced AI Infrastructure for Generative AI

Learn how NVIDIA DGX GB200 systems accelerate AI innovation.

[Read the Datasheet](https://resources.nvidia.com/en-us-dgx-systems/dgx-superpod-gb200-datasheet)

### Successful Enterprise Deployments

Read how the [NVIDIA DGX platform](https://www.nvidia.com/en-us/data-center/dgx-platform.md) and [NVIDIA NeMo™](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework.md) have empowered leading enterprises.

[Download the Ebook](https://resources.nvidia.com/en-us-dgx-systems/nvidia-llm-customers-ebook-dgx-superpod-nemo-us-web)


## Turnkey AI Factory Infrastructure

### Maximize Developer Productivity

An intelligent control plane tracks thousands of data points across hardware, software, and data center infrastructure to ensure continuous operation and data integrity, plan for maintenance, and automatically reconfigure the cluster to avoid downtime.

### Massive Supercomputing for Generative AI

Scaling up to tens of thousands of NVIDIA GB200 Superchips, NVIDIA DGX GB200 effortlessly performs training and inference on state-of-the-art trillion-parameter [generative AI models](https://www.nvidia.com/en-us/ai-data-science/generative-ai.md).

### Built on NVIDIA Grace Blackwell

NVIDIA GB200 Superchips, each with one NVIDIA Grace CPU and two NVIDIA Blackwell GPUs, are connected via fifth-generation NVLink to achieve 1.8 terabytes per second (TB/s) of GPU-to-GPU bandwidth.


## NVIDIA DGX GB200 Specifications

| Specification | Value |
| --- | --- |
| **GPU** | 72x NVIDIA Blackwell GPUs, 36x NVIDIA Grace CPUs |
| **CPU Cores** | 2,592 Arm® Neoverse V2 cores |
| **GPU Memory \| Bandwidth** | Up to 13.4 TB HBM3e \| 576 TB/s |
| **Total Fast Memory** | 30.2 TB |
| **Performance** | FP4 Tensor Core: 1,440 PFLOPS \| 720 PFLOPS\* <br> FP8/FP6 Tensor Core: 720 PFLOPS \| 360 PFLOPS\* |
| **Interconnect** | 72x OSFP single-port NVIDIA ConnectX®-7 VPI with 400 Gb/s NVIDIA InfiniBand <br> 36x dual-port NVIDIA BlueField®-3 VPI with 200 Gb/s NVIDIA InfiniBand and Ethernet |
| **NVIDIA NVLink Switch System** | 9x L1 NVIDIA NVLink Switches |
| **Management Network** | Host baseboard management controller (BMC) with RJ45 |
| **Software** | NVIDIA Mission Control <br> NVIDIA AI Enterprise <br> NVIDIA DGX OS / Ubuntu |
| **Enterprise Support** | Three-year business-standard hardware and software support |

\*Specifications shown as sparse \| dense.


## Delivering AI Factories to Every Enterprise

### NVIDIA DGX SuperPOD

NVIDIA DGX SuperPOD™ is a turnkey AI data center infrastructure solution that delivers uncompromising performance for every user and workload. Configurable with any DGX system, DGX SuperPOD provides leadership-class accelerated infrastructure with scalable performance for the most demanding AI training and inference workloads, backed by industry-proven results.

[AI Infrastructure for Enterprise Deployments](https://www.nvidia.com/en-us/data-center/dgx-superpod.md)

### Introducing NVIDIA Mission Control

NVIDIA Mission Control streamlines AI factory operations, from workloads to infrastructure, with world-class expertise delivered as software. It powers NVIDIA Blackwell data centers, bringing instant agility for inference and training while providing full-stack intelligence for infrastructure resilience. Every enterprise can run AI with hyperscale efficiency, simplifying and accelerating AI experimentation.

[Run Models, Automate the Essentials](https://www.nvidia.com/en-us/data-center/mission-control.md)

### Maximize the Value of the NVIDIA DGX Platform

NVIDIA Enterprise Services provide support, education, and infrastructure specialists for your NVIDIA DGX infrastructure. With NVIDIA experts available at every step of your AI journey, Enterprise Services can help you get your projects up and running quickly and successfully.

[Learn More About NVIDIA DGX Enterprise Services](https://www.nvidia.com/en-us/data-center/dgx-support.md)


## Take the Next Steps

### Get the NVIDIA DGX Platform

The NVIDIA DGX platform is made up of a wide variety of products and services to fit the needs of every AI enterprise.

[Get DGX](https://www.nvidia.com/en-us/data-center/get-dgx.md)

### Discover the Benefits of the NVIDIA DGX Platform

NVIDIA DGX is the proven standard on which enterprise AI is built.

[Learn More](https://www.nvidia.com/en-us/data-center/dgx-platform.md)

### NVIDIA DGX SuperPOD Documentation

Access deployment and management guides for NVIDIA DGX SuperPOD.

[Learn More](https://docs.nvidia.com/dgx-superpod/index.html)

## Contact Us To Learn More About DGX
