# NVIDIA DGX B300

The AI factory foundation for AI reasoning.

## Where to Buy

[Get DGX](https://www.nvidia.com/en-us/data-center/get-dgx.md)

[Talk to Us](#contact-us)

[Datasheet](https://resources.nvidia.com/en-us-dgx-systems/dgx-b300-datasheet) | [Specifications](#m-specs) | [Documentation](https://docs.nvidia.com/dgx/)

## Overview

## Accelerating AI for Every Enterprise

NVIDIA DGX™ B300 is the powerhouse for AI innovators, delivering the hyperscaler performance needed to build a modern AI factory. Powered by [NVIDIA Blackwell Ultra](https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture.md) GPUs, DGX B300 boosts dense FP4 performance by 1.5x and attention performance by 2x over DGX B200, all in a new form factor designed to fit seamlessly into the modern data center. Compatible with [NVIDIA MGX](https://nvidianews.nvidia.com/news/nvidia-contributes-blackwell-platform-design-to-open-hardware-ecosystem-accelerating-ai-infrastructure-innovation)™ and traditional enterprise racks and with full-stack software, it simplifies and streamlines AI deployment, enabling any enterprise to run like a hyperscaler.

### Explore NVIDIA DGX B300

Learn how NVIDIA DGX B300 streamlines AI deployment while delivering the computational power needed to handle generative AI workloads.

[Read the Datasheet](https://resources.nvidia.com/en-us-dgx-systems/dgx-b300-datasheet)

### Lilly Deploys Largest AI Factory For Drug Discovery

Lilly’s new NVIDIA DGX SuperPOD™ with DGX B300 systems will enable breakthroughs in genomics, medicine, and molecular design.

[Read the Announcement](https://blogs.nvidia.com/blog/lilly-ai-factory-nvidia-blackwell-dgx-superpod/)

## Features

## An AI Factory for the Era of Reasoning

### Real-Time Powerhouse for Inference and Training

Powered by NVIDIA Blackwell Ultra GPUs, DGX B300 provides enterprises with a single platform to accelerate [large language model (LLM)](https://www.nvidia.com/en-us/glossary/large-language-models.md) inference and training. Delivering 144 petaFLOPS of inference performance, the system enables every business to operate like a hyperscaler.
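As a rough back-of-envelope sketch (assuming the 144 petaFLOPS figure is the system-wide sparse FP4 number, split evenly across the eight GPUs, which is a reading of the spec table rather than an official per-GPU figure), the per-GPU share works out as:

```python
# Hypothetical sizing sketch, not an official NVIDIA per-GPU figure:
# split the quoted aggregate FP4 number evenly across the 8 GPUs.
TOTAL_FP4_PFLOPS = 144  # system-wide sparse FP4 (from the spec table)
NUM_GPUS = 8            # Blackwell Ultra SXM GPUs per DGX B300

per_gpu_pflops = TOTAL_FP4_PFLOPS / NUM_GPUS
print(f"Per-GPU FP4 (sparse): {per_gpu_pflops:.0f} PFLOPS")  # 18 PFLOPS
```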

### Efficient and Sustainable Innovation

With multiple power options to choose from, NVIDIA DGX B300 is designed to be the most energy-efficient AI supercomputer in its class, delivering leading performance per watt.

### Revolutionary Infrastructure Standard

NVIDIA DGX B300 has been redesigned for the modern data center, deployable in NVIDIA MGX racks for the first time. This new industry standard is powering a shift in data center engineering, making it easier than ever to obtain breakthrough performance and efficiency.

## NVIDIA DGX B300 Systems Are Shipping Now

Explore the new features and capabilities, including AC and DC power options, that make DGX B300 easy to integrate into any modern data center, with greater deployment flexibility than ever before.

[Read Technical Brief](https://resources.nvidia.com/en-us-dgx-systems/dgx-b300-technical-brief)

## Specifications

## NVIDIA DGX B300 Specifications

| Specification | Value |
| --- | --- |
| GPUs | 8x NVIDIA Blackwell Ultra SXM |
| CPU | Intel® Xeon® 6776P Processors |
| Total GPU Memory | 2.1 TB |
| Performance | FP4 Tensor Core: 144 PFLOPS \| 108 PFLOPS\*; FP8 Tensor Core: 72 PFLOPS\*\* |
| NVIDIA NVLink™ Switch System | 2x |
| NVIDIA NVLink Bandwidth | 14.4 TB/s aggregate bandwidth |
| Networking | 8x OSFP ports serving 8x single-port NVIDIA ConnectX-8 VPI, up to 800 Gb/s NVIDIA InfiniBand/Ethernet; 2x dual-port QSFP112 NVIDIA BlueField-3 DPUs, up to 400 Gb/s NVIDIA InfiniBand/Ethernet |
| Management Network | 1 GbE onboard NIC with RJ45; 1 GbE RJ45 host baseboard management controller (BMC) |
| Storage | OS: 2x 1.9 TB NVMe M.2; internal storage: 8x 3.84 TB NVMe E1.S |
| Power Consumption | ~14 kW |
| Software | NVIDIA AI Enterprise (optimized AI software); NVIDIA Mission Control (AI data center operations and orchestration with NVIDIA Run:ai technology); NVIDIA DGX OS (operating system); supports Red Hat Enterprise Linux, Rocky Linux, and Ubuntu |
| Rack Units | 10U |
| Support | Three-year business-standard hardware and software support |

\* Specification shown as sparse \| dense.
\*\* Specification shown as sparse; the dense value is half the sparse value.

## Offers

## Delivering Supercomputing to Every Enterprise

### NVIDIA DGX SuperPOD

**Leadership-class AI infrastructure purpose-built for the unique demands of AI.**

NVIDIA DGX SuperPOD is a turnkey AI data center infrastructure solution that delivers uncompromising performance for every user and workload. Configurable with any NVIDIA DGX system, DGX SuperPOD provides leadership-class accelerated infrastructure with scalable performance for the most demanding AI training and inference workloads, allowing IT to deliver performance without compromise.

[Learn More About NVIDIA DGX SuperPOD](https://www.nvidia.com/en-us/data-center/dgx-superpod.md)

### NVIDIA DGX BasePOD

**The industry standard for AI at scale.**

AI is powering mission-critical use cases in every industry—from healthcare to manufacturing to financial services. NVIDIA DGX BasePOD™ provides the reference architecture on which businesses can build and scale AI infrastructure.

[Learn More About NVIDIA DGX BasePOD](https://www.nvidia.com/en-us/data-center/dgx-basepod.md)

### NVIDIA Mission Control

**Integrated orchestration software for AI factories at scale.**

NVIDIA Mission Control streamlines AI factory operations, delivering instant agility, infrastructure resiliency, and hyperscale efficiency, and accelerates AI experimentation with full-stack software intelligence.

[Run Models, Automate the Essentials](https://www.nvidia.com/en-us/data-center/mission-control.md)

### NVIDIA DGX Enterprise Support

**Maximize the value of your NVIDIA DGX B300.**

NVIDIA Enterprise Services provides support, education, and infrastructure specialists for your NVIDIA DGX infrastructure. With NVIDIA experts available at every step of your AI journey, Enterprise Services can help you get your projects up and running quickly and successfully.

[Learn More About Enterprise Services](https://www.nvidia.com/en-us/data-center/dgx-support.md)

### DLI for NVIDIA DGX

**Exclusive training offers for NVIDIA DGX customers.**

NVIDIA DGX customers can learn how to achieve cutting-edge breakthroughs with AI faster with exclusive technical training offered by the AI experts at [NVIDIA’s Deep Learning Institute (DLI)](https://www.nvidia.com/en-us/training.md).

[Check Out DLI for NVIDIA DGX Training](https://resources.nvidia.com/en-us-dgx-systems/dgx-training-special-offer-2023)

## NVIDIA Blackwell Ultra FAQs

## What is NVIDIA Blackwell Ultra, and how much does inference cost?

NVIDIA Blackwell Ultra is the GPU architecture behind today's most powerful AI inference systems, including NVIDIA DGX B300 and DGX GB300. Through hardware–software codesign, Blackwell Ultra systems deliver up to 50x higher throughput per megawatt and up to 35x lower cost per token than NVIDIA Hopper™ on low-latency agentic workloads, according to [SemiAnalysis InferenceX benchmarks](https://inferencex.semianalysis.com/) (Q1 2026).

## Can NVIDIA Blackwell Ultra-powered systems run DeepSeek-R1 and other reasoning models?

Yes. The large memory spaces available with Blackwell Ultra-powered systems enable DeepSeek-R1 (671B MoE) inference on fewer GPUs with lower tensor parallelism overhead. In [MLPerf Inference v6.0 (April 2026)](https://mlcommons.org/benchmarks/), systems powered by NVIDIA Blackwell Ultra GPUs delivered the highest throughput across the widest range of models and scenarios. On DeepSeek-R1, Blackwell Ultra systems delivered 2.5 million tokens per second—up to 2.7x higher token throughput compared to Blackwell Ultra debut submissions just six months prior, as a result of NVIDIA TensorRT™-LLM software updates.
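A quick sizing sketch illustrates why the large memory space matters (this is a hypothetical estimate based on the published parameter count and the DGX B300 spec table, not an official deployment figure):

```python
# Rough sketch: do DeepSeek-R1's weights fit in one DGX B300's GPU memory?
# Assumptions (hypothetical): all weights stored in FP4 (2 params/byte),
# ignoring KV cache, activations, and framework overhead.
PARAMS_B = 671            # DeepSeek-R1 total parameters, in billions
BYTES_PER_PARAM_FP4 = 0.5 # FP4 packs two parameters per byte
GPU_MEMORY_TB = 2.1       # total DGX B300 GPU memory (spec table)

weights_tb = PARAMS_B * 1e9 * BYTES_PER_PARAM_FP4 / 1e12
print(f"FP4 weights: ~{weights_tb:.2f} TB of {GPU_MEMORY_TB} TB available")
# The full model fits on a single system with room left for KV cache,
# which is one reason tensor-parallelism overhead drops.
```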

## What is the NVIDIA Blackwell Ultra token cost for reasoning models like DeepSeek-R1?

NVIDIA Blackwell Ultra delivers AI inference at $0.24 per million tokens at 102 tokens per second (TPS) per user on DeepSeek-R1, using NVIDIA Dynamo, TensorRT-LLM, and MTP, according to [SemiAnalysis InferenceX benchmarks](https://inferencex.semianalysis.com/) as of April 2026.
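Taking the quoted figures at face value, the per-user cost rate follows from simple arithmetic (a sketch under the stated benchmark assumptions, not an NVIDIA pricing quote):

```python
# Sketch: convert the quoted $/Mtok figure into a per-user hourly rate.
# Assumes the SemiAnalysis numbers as given in the FAQ above.
COST_PER_MTOK = 0.24   # USD per million tokens
TPS_PER_USER = 102     # tokens per second, per user

tokens_per_hour = TPS_PER_USER * 3600
cost_per_user_hour = tokens_per_hour / 1e6 * COST_PER_MTOK
print(f"~{tokens_per_hour:,} tokens/user-hour ≈ ${cost_per_user_hour:.3f}/user-hour")
# ~367,200 tokens per user-hour, i.e. roughly $0.09 per user-hour
```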

## Get Started

### NVIDIA DGX B300 is Available Now

Deploy NVIDIA DGX B300 today on premises, in a colocation facility, or in the cloud through one of our partners.

[Get DGX](https://www.nvidia.com/en-us/data-center/get-dgx.md)

#### Discover the Benefits of the NVIDIA DGX Platform

The NVIDIA DGX platform is the proven standard on which enterprise AI is built.

[Learn More](https://www.nvidia.com/en-us/data-center/dgx-platform.md)

#### Need Help Selecting the Right Product or Partner?

Reach out to an NVIDIA product specialist about your professional needs.

[Talk to Us](#contact-us)

## Contact Us To Learn More About DGX

