# NVIDIA H100 GPU

Extraordinary performance, scalability, and security for every data center.

[View Datasheet](https://resources.nvidia.com/en-us-hopper-architecture/nvidia-tensor-core-gpu-datasheet)

## An Order-of-Magnitude Leap for Accelerated Computing

The NVIDIA H100 GPU delivers exceptional performance, scalability, and security for every workload. H100 uses breakthrough innovations based on the [NVIDIA Hopper™ architecture](https://www.nvidia.com/en-us/data-center/technologies/hopper-architecture.md) to deliver industry-leading conversational AI, speeding up large language models (LLMs) by 30X. H100 also includes a dedicated Transformer Engine to solve trillion-parameter language models.

[Read NVIDIA H100 Datasheet](https://resources.nvidia.com/en-us-hopper-architecture/nvidia-tensor-core-gpu-datasheet)

## Securely Accelerate Workloads From Enterprise to Exascale

### Up to 4X Higher AI Training on GPT-3

Projected performance, subject to change. GPT-3 175B training | A100 cluster: HDR InfiniBand network | H100 cluster: NDR InfiniBand network. Mixture of Experts (MoE) training, Transformer Switch-XXL variant with 395B parameters on a 1T-token dataset | A100 cluster: HDR InfiniBand network | H100 cluster: NDR InfiniBand network with NVLink Switch System where indicated.

## Transformational AI Training

H100 features fourth-generation Tensor Cores and a Transformer Engine with FP8 precision that together provide up to 4X faster training over the prior generation for GPT-3 (175B) models. The combination of fourth-generation NVLink, which offers 900 gigabytes per second (GB/s) of GPU-to-GPU interconnect; NVIDIA Quantum-2 InfiniBand (NDR) networking, which accelerates communication for every GPU across nodes; PCIe Gen5; and [NVIDIA Magnum IO™](https://www.nvidia.com/en-us/data-center/magnum-io.md) software delivers efficient scalability from small enterprise systems to massive, unified GPU clusters.
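
As a concrete illustration, here is a minimal sketch of FP8 training with the open-source Transformer Engine library in PyTorch. The layer size, recipe settings, and training step are illustrative assumptions, not a tuned recipe.

```python
# Minimal sketch: FP8 training with NVIDIA Transformer Engine (PyTorch).
# Assumes the transformer_engine package and an H100-class GPU; the layer
# size and recipe settings below are illustrative, not tuned values.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# A small stand-in model built from a Transformer Engine layer.
model = te.Linear(4096, 4096, bias=True).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Delayed-scaling FP8 recipe: scaling factors are derived from a history
# of absolute-maximum values observed in earlier iterations.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

x = torch.randn(16, 4096, device="cuda")
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(x)  # forward GEMM runs in FP8 on the Tensor Cores
loss = out.float().pow(2).mean()
loss.backward()
optimizer.step()
```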

Deploying H100 GPUs at data center scale delivers outstanding performance and brings the next generation of exascale high-performance computing (HPC) and trillion-parameter AI within the reach of all researchers.

[Experience NVIDIA AI and NVIDIA H100 on NVIDIA LaunchPad](https://www.nvidia.com/en-us/launchpad/ai/tuning-and-deploying-a-language-model-on-h100.md)

## Real-Time Deep Learning Inference

AI solves a wide array of business challenges, using an equally wide array of neural networks. A great AI inference accelerator has to deliver not only the highest performance but also the versatility to accelerate all of these networks.

H100 extends NVIDIA’s market leadership in inference with several advancements that accelerate inference by up to 30X and deliver the lowest latency. Fourth-generation Tensor Cores speed up all precisions, including FP64, TF32, FP32, FP16, INT8, and now FP8, reducing memory usage and increasing performance while still maintaining accuracy for LLMs.
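
To see why lower precision reduces memory usage, here is a back-of-envelope calculation for the weights of a 175B-parameter model (activations, KV cache, and runtime overheads are ignored):

```python
# Back-of-envelope sketch: memory needed just to hold the weights of a
# 175B-parameter model at different precisions.
PARAMS = 175e9
for name, bytes_per_param in [("FP32", 4), ("FP16/BF16", 2), ("FP8/INT8", 1)]:
    print(f"{name}: {PARAMS * bytes_per_param / 1e9:.0f} GB of weights")
# FP32: 700 GB, FP16/BF16: 350 GB, FP8/INT8: 175 GB
```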

### Up to 30X Higher AI Inference Performance on the Largest Models

Megatron chatbot inference (530 billion parameters)

Projected performance, subject to change. Inference on a chatbot based on the Megatron 530B-parameter model, with input sequence length = 128 and output sequence length = 20 | A100 cluster: HDR InfiniBand network | H100 cluster: NVLink Switch System, NDR InfiniBand.

### Up to 7X Higher Performance for HPC Applications

Projected performance, subject to change. 3D FFT (4K³) throughput | A100 cluster: HDR InfiniBand network | H100 cluster: NVLink Switch System, NDR InfiniBand. Genome sequencing (Smith-Waterman) | 1x A100 | 1x H100.

## Exascale High-Performance Computing

The NVIDIA data center platform consistently delivers performance gains beyond Moore’s law. And H100’s new breakthrough AI capabilities further amplify the power of HPC+AI to accelerate time to discovery for scientists and researchers working on solving the world’s most important challenges.

H100 triples the floating-point operations per second (FLOPS) of double-precision Tensor Cores, delivering 60 teraflops of FP64 computing for HPC. AI-fused HPC applications can also leverage H100’s TF32 precision to achieve one petaflop of throughput for single-precision matrix-multiply operations, with zero code changes.
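
Because TF32 is a drop-in mode for FP32 math, frameworks can opt in globally with no model code changes. A minimal sketch of the TF32 switches in PyTorch (defaults vary by framework version, so treat this as illustrative):

```python
# Sketch: opting in to TF32 Tensor Core math for FP32 workloads in PyTorch.
# The model code itself is unchanged; these global flags route FP32 matmuls
# and convolutions through TF32 on Ampere/Hopper-class GPUs.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # FP32 matmuls may use TF32
torch.backends.cudnn.allow_tf32 = True         # cuDNN convolutions may use TF32

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
c = a @ b  # runs on Tensor Cores in TF32; inputs and outputs stay FP32 tensors
```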

H100 also features new DPX instructions that deliver 7X higher performance over A100 and 40X speedups over CPUs on dynamic programming algorithms such as Smith-Waterman for DNA sequence alignment and protein alignment for protein structure prediction.
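
For context, here is a minimal pure-Python sketch of the Smith-Waterman recurrence, the kind of max-heavy dynamic-programming inner loop that DPX instructions accelerate in hardware. The scoring values are illustrative:

```python
# Reference sketch of the Smith-Waterman local-alignment recurrence;
# a scalar pure-Python version for clarity, not a performant implementation.
def smith_waterman(a: str, b: str, match: int = 2,
                   mismatch: int = -1, gap: int = -2) -> int:
    """Return the best local-alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment clamps scores at zero; this fused max pattern
            # is exactly what DPX instructions target.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCU"))
```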

[Review Latest GPU Performance on HPC Applications](https://developer.nvidia.com/hpc-application-performance)

DPX instructions comparison: NVIDIA HGX™ H100 with 4 GPUs versus dual-socket, 32-core Intel Ice Lake CPUs.

## Accelerated Data Analytics

Data analytics often consumes the majority of time in AI application development. Since large datasets are scattered across multiple servers, scale-out solutions with commodity CPU-only servers get bogged down by a lack of scalable computing performance.

Accelerated servers with H100 deliver the compute power—along with 3 terabytes per second (TB/s) of memory bandwidth per GPU and scalability with NVLink and NVSwitch™—to tackle data analytics with high performance and scale to support massive datasets. Combined with NVIDIA Quantum-2 InfiniBand, Magnum IO software, GPU-accelerated Spark 3.0, and [NVIDIA RAPIDS™](https://www.nvidia.com/en-us/deep-learning-ai/software/rapids.md), the NVIDIA data center platform is uniquely able to accelerate these huge workloads with higher performance and efficiency.
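
As a sketch of what GPU-accelerated analytics looks like in practice, here is a hypothetical RAPIDS cuDF aggregation. The file and column names are placeholders; the API intentionally mirrors pandas:

```python
# Hedged sketch: a GPU-accelerated aggregation with NVIDIA RAPIDS cuDF.
# File and column names are hypothetical placeholders.
import cudf

df = cudf.read_parquet("transactions.parquet")   # decoded directly on the GPU
summary = (
    df.groupby("customer_id")
      .agg({"amount": "sum", "order_id": "count"})
      .sort_values("amount", ascending=False)
)
print(summary.head())
```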

## Enterprise-Ready Utilization

IT managers seek to maximize utilization (both peak and average) of compute resources in the data center. They often employ dynamic reconfiguration of compute to right-size resources for the workloads in use.

H100 with MIG lets infrastructure managers standardize their GPU-accelerated infrastructure while retaining the flexibility to provision GPU resources with greater granularity, securely giving developers the right amount of accelerated compute and optimizing usage of all their GPU resources.
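
A minimal sketch of inspecting a MIG layout programmatically with the pynvml NVML bindings, assuming an administrator has already enabled MIG and created instances (for example, with nvidia-smi):

```python
# Hedged sketch: enumerating MIG instances via NVML with the pynvml bindings.
# Read-only: this inspects an existing layout; it does not create instances.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
current, pending = pynvml.nvmlDeviceGetMigMode(gpu)
print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
    try:
        mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
    except pynvml.NVMLError:
        continue  # this MIG slot is not populated
    mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
    print(f"MIG device {i}: {mem.total / 1e9:.0f} GB framebuffer")

pynvml.nvmlShutdown()
```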

[Learn More About MIG](https://www.nvidia.com/en-us/technologies/multi-instance-gpu.md)

## Built-In Confidential Computing

Traditional confidential computing solutions are CPU-based, which is too limited for compute-intensive workloads such as AI at scale. NVIDIA Confidential Computing is a built-in security feature of the [NVIDIA Hopper architecture](https://www.nvidia.com/en-us/data-center/technologies/hopper-architecture.md) that makes H100 the world’s first accelerator with these capabilities. [NVIDIA Blackwell](https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture.md) builds on this foundation, further increasing performance while protecting the confidentiality and integrity of data and applications in use. Customers can now use a hardware-based trusted execution environment (TEE) that secures and isolates the entire workload in the most performant way.

[Learn More About NVIDIA Confidential Computing](https://www.nvidia.com/en-us/data-center/solutions/confidential-computing.md)

## Exceptional Performance for Large-Scale AI and HPC

The Hopper GPU will power the NVIDIA Grace Hopper™ CPU+GPU architecture, purpose-built for terabyte-scale accelerated computing and providing 10X higher performance on large-model AI and HPC. The NVIDIA Grace CPU leverages the flexibility of the Arm® architecture to create a CPU and server architecture designed from the ground up for accelerated computing. The Hopper GPU is paired with the Grace CPU using NVIDIA’s ultra-fast chip-to-chip interconnect, delivering 900GB/s of bandwidth, 7X faster than PCIe Gen5. This innovative design will deliver up to 30X higher aggregate system memory bandwidth to the GPU compared with today's fastest servers, and up to 10X higher performance for applications processing terabytes of data.
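
The 7X figure follows directly from the link bandwidths. A quick sanity check, taking roughly 128GB/s as the bidirectional bandwidth of a PCIe Gen5 x16 link:

```python
# Arithmetic behind the "7X faster than PCIe Gen5" claim.
nvlink_c2c_gbps = 900      # NVLink-C2C total bandwidth, GB/s
pcie_gen5_x16_gbps = 128   # PCIe Gen5 x16, bidirectional, GB/s
print(f"{nvlink_c2c_gbps / pcie_gen5_x16_gbps:.1f}x")  # ~7.0x
```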

[Learn More About NVIDIA Grace](https://www.nvidia.com/en-us/data-center/grace-cpu.md)

## Supercharge Large Language Model Inference With H100 NVL

For LLMs up to 70 billion parameters (Llama 2 70B), the PCIe-based NVIDIA H100 NVL with NVLink bridge utilizes Transformer Engine, NVLink, and 188GB HBM3 memory to provide optimum performance and easy scaling across any data center, bringing LLMs to the mainstream. Servers equipped with H100 NVL GPUs increase Llama 2 70B performance up to 5x over NVIDIA A100 systems while maintaining low latency in power-constrained data center environments.
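
As one hypothetical deployment path, here is a sketch using the open-source vLLM library to shard Llama 2 70B across two H100 NVL GPUs. The model identifier and sampling settings are assumptions:

```python
# Hedged sketch: serving Llama 2 70B on a pair of H100 NVL GPUs with vLLM.
# tensor_parallel_size=2 splits the model across the two bridged GPUs.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-70b-chat-hf", tensor_parallel_size=2)
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain NVLink in one paragraph."], params)
print(outputs[0].outputs[0].text)
```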

## Enterprise-Ready: AI Software Streamlines Development and Deployment

NVIDIA H100 NVL is bundled with a five-year [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise.md) subscription, which simplifies the way you build an enterprise AI-ready platform. H100 accelerates AI development and deployment for production-ready generative AI solutions, including computer vision, speech AI, retrieval-augmented generation (RAG), and more. NVIDIA AI Enterprise includes [NVIDIA NIM™](https://www.nvidia.com/en-us/ai.md), a set of easy-to-use microservices designed to speed up enterprise generative AI deployment. Together, these deployments gain enterprise-grade security, manageability, stability, and support, resulting in performance-optimized AI solutions that deliver faster business value and actionable insights.
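
NIM microservices expose OpenAI-compatible endpoints, so a deployed service can be queried with standard clients. A hedged sketch, where the base URL, port, and model name are assumptions that depend on the specific NIM container you deploy:

```python
# Hedged sketch: querying a locally deployed NIM microservice through its
# OpenAI-compatible endpoint using the standard openai Python client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="meta/llama-2-70b-chat",  # hypothetical model name
    messages=[{"role": "user", "content": "Summarize RAG in two sentences."}],
)
print(resp.choices[0].message.content)
```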

[Activate Your NVIDIA AI Enterprise License](https://www.nvidia.com/en-us/data-center/activate-license.md)

## Product Specifications

|  | H100 SXM | H100 NVL |
| --- | --- | --- |
| FP64 | 34 teraFLOPS | 30 teraFLOPS |
| FP64 Tensor Core | 67 teraFLOPS | 60 teraFLOPS |
| FP32 | 67 teraFLOPS | 60 teraFLOPS |
| TF32 Tensor Core\* | 989 teraFLOPS | 835 teraFLOPS |
| BFLOAT16 Tensor Core\* | 1,979 teraFLOPS | 1,671 teraFLOPS |
| FP16 Tensor Core\* | 1,979 teraFLOPS | 1,671 teraFLOPS |
| FP8 Tensor Core\* | 3,958 teraFLOPS | 3,341 teraFLOPS |
| INT8 Tensor Core\* | 3,958 TOPS | 3,341 TOPS |
| GPU Memory | 80GB | 94GB |
| GPU Memory Bandwidth | 3.35TB/s | 3.9TB/s |
| Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG |
| Max Thermal Design Power (TDP) | Up to 700W (configurable) | 350–400W (configurable) |
| Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 12GB each |
| Form Factor | SXM | PCIe, dual-slot, air-cooled |
| Interconnect | NVIDIA NVLink™: 900GB/s; PCIe Gen5: 128GB/s | NVIDIA NVLink: 600GB/s; PCIe Gen5: 128GB/s |
| Server Options | NVIDIA HGX™ H100 Partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs; NVIDIA DGX H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1–8 GPUs |
| NVIDIA AI Enterprise | Add-on | Included |

\* With sparsity

## NVIDIA H100 GPU FAQs

### What is the cost per million tokens on NVIDIA H100?

NVIDIA H100 delivers inference at approximately $0.09 per million tokens at 66 TPS/user for GPT-OSS-120B using vLLM, according to [SemiAnalysis InferenceX benchmarks](https://inferencex.semianalysis.com/) as of April 2026.
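
For intuition, the underlying arithmetic is simply GPU cost per hour divided by tokens served per hour. A sketch with hypothetical inputs (not the SemiAnalysis figures):

```python
# Hedged sketch of the cost-per-million-tokens arithmetic. The hourly rate
# and throughput below are hypothetical inputs for illustration only.
def cost_per_million_tokens(gpu_hourly_usd: float, tokens_per_second: float) -> float:
    """Dollars per one million generated tokens for a single GPU."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1e6

# Example: a $2.00/hour GPU sustaining 6,000 aggregate tokens/s
print(f"${cost_per_million_tokens(2.00, 6000):.3f} per million tokens")  # ~$0.093
```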

### What is the inference TCO for NVIDIA B200 compared to H100?

The most important metric for AI inference TCO is the cost per token, or the price-performance actually delivered. According to [SemiAnalysis InferenceX benchmarks](https://inferencex.semianalysis.com/) as of April 2026, NVIDIA B200 delivers inference at approximately $0.02 per million tokens at 55 TPS/user for GPT-OSS-120B using NVIDIA TensorRT™-LLM, roughly 4.5x cheaper than H100 at $0.09 per million tokens with vLLM.

Take a deep dive into the NVIDIA Hopper architecture.

[Read Whitepaper](https://resources.nvidia.com/en-us-hopper-architecture/nvidia-h100-tensor-c)