# Get Started With NVIDIA Triton

Find the right license to deploy, run, and scale AI for any application on any platform.

## NVIDIA Triton Licensing Options

|  | GitHub For individuals who want access to the Triton Inference Server open-source code for development. | NVIDIA NGC For individuals who want free Triton Inference Server containers for development. | NVIDIA AI Enterprise For enterprises looking to purchase Triton for production. |
| --- | --- | --- | --- |
| Features | [Access Code](https://github.com/triton-inference-server/server) | [Get Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver) | [Contact Sales](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/contact-sales.md) |
| NVIDIA Triton™ Inference Server |  |  |  |
| Custom builds (Windows, NVIDIA® Jetson™), PyTriton |  |  |  |
| Prebuilt Docker container (version dependencies: CUDA®, framework) |  |  |  |
| Triton Management Service (model orchestration for large-scale deployments) |  |  |  |
| AI Workflows and reference architectures for common AI use cases |  |  |  |
| Workload and infrastructure management features |  |  |  |
| Business-standard support, including:<br>• Unlimited technical support cases accepted via the customer portal and phone 24/7<br>• Escalation support during local business hours (9:00 a.m.–5:00 p.m., Monday–Friday)<br>• Timely resolution provided by NVIDIA experts and engineers<br>• Security fixes and priority notifications<br>• Production branches that ensure API stability<br>• Three years of long-term support |  |  |  |
| Hands-on NVIDIA LaunchPad labs |  |  | [Try Now](https://www.nvidia.com/en-us/launchpad/ai/inference.md) |

## FAQs

### What is NVIDIA Triton Inference Server?

NVIDIA Triton Inference Server, or Triton for short, is open-source inference-serving software. It lets teams deploy, run, and scale AI models from any framework (TensorFlow, NVIDIA TensorRT™, PyTorch, ONNX, XGBoost, Python, custom, and more) on any GPU- or CPU-based infrastructure (cloud, data center, or edge). For more information, please visit the [Triton webpage](https://www.nvidia.com/en-us/ai-data-science/products/triton-inference-server.md).
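As a rough illustration of the workflow, the sketch below shows a hypothetical model repository layout and a launch of the prebuilt NGC container. The model name, file names, and container tag are illustrative assumptions, not part of this page; check the NGC catalog for a current release tag.

```shell
# Hypothetical model repository layout (names are illustrative):
#   model_repository/
#   └── densenet_onnx/
#       ├── config.pbtxt        # model configuration (backend, inputs, outputs)
#       └── 1/
#           └── model.onnx      # version 1 of the model

# Run the Triton container from NGC, mounting the repository
# (the :24.08-py3 tag is an example only):
docker run --gpus=all --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v "$PWD/model_repository:/models" \
  nvcr.io/nvidia/tritonserver:24.08-py3 \
  tritonserver --model-repository=/models

# Check readiness over the server's HTTP endpoint:
curl -s http://localhost:8000/v2/health/ready
```

This is a deployment sketch, not a tested recipe; GPU access, the `--gpus` flag, and the exact container tag depend on your environment.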

### When should I use the Triton Model Analyzer?

[Triton Model Analyzer](https://github.com/triton-inference-server/model_analyzer) is an offline tool for optimizing inference deployment configurations (batch size, number of model instances, etc.) for throughput, latency, and/or memory constraints on the target GPU or CPU. It supports analysis of a single model, model ensembles, and multiple concurrent models.
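A typical invocation might look like the sketch below. The repository path and model name are illustrative assumptions; consult the Model Analyzer repository for the full set of `profile` options.

```shell
# Install the analyzer (hypothetical, assumes a working Triton setup):
pip install triton-model-analyzer

# Profile one model from an existing repository, letting the analyzer
# sweep candidate configurations under a latency budget (milliseconds):
model-analyzer profile \
  --model-repository /path/to/model_repository \
  --profile-models densenet_onnx \
  --latency-budget 10
```

The analyzer then reports the measured throughput and latency of each candidate configuration so you can pick one before deploying.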

### How can customers get enterprise support for Triton?

Triton is included with [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise-suite.md), an end-to-end AI software platform with enterprise-grade support, security, stability, and manageability. NVIDIA AI Enterprise includes [Business Standard Support](https://www.nvidia.com/en-us/data-center/products/ai-enterprise-suite/support.md), which provides access to NVIDIA AI experts, customer training, knowledge base resources, and more. Additional enterprise support and services are also available, including business-critical support, a dedicated technical account manager, training, and professional services. For more information, please visit the [Enterprise Support and Services User Guide](https://docscontent.nvidia.com/9c/a7/6f07c5864ddf88bd73452d3a2953/enterprise-support-services-user-guide.pdf).

### Is there a Triton lab in NVIDIA LaunchPad?

Yes, there are several labs that use Triton in [NVIDIA LaunchPad](https://www.nvidia.com/en-us/launchpad/ai/inference.md).

[NVIDIA LaunchPad](https://www.nvidia.com/en-us/launchpad.md) is a program that gives users short-term access to enterprise NVIDIA hardware and software via a web browser. Select from a large catalog of hands-on labs covering use cases from AI and data science to 3D design and infrastructure optimization. Enterprises can immediately tap into the necessary hardware and software stacks on privately hosted infrastructure.

### Is Triton available from cloud service providers?

Yes, Triton is widely available across the AI inference ecosystem. With NVIDIA AI Enterprise, Triton is available in the [AWS](https://aws.amazon.com/marketplace/pp/prodview-ozgjkov6vq3l6?sr=0-1&ref_=beagle&applicationId=AWSMPContessa), [Microsoft Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nvidia.nvidia-ai-enterprise?tab=Overview), and [Google Cloud](https://console.cloud.google.com/marketplace/product/nvidia/nvidia-ai-enterprise-vmi?project=nvidia-ngc-public) marketplaces. It’s also available in [Alibaba Cloud](https://www.alibabacloud.com/help/en/pai/user-guide/built-in-processors#section-l3c-0f8-ybc), [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/marketplace/pp/prodview-mzfjpok66eclw), [Amazon Elastic Container Service (ECS)](https://aws.amazon.com/marketplace/pp/prodview-mzfjpok66eclw), [Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/triton.html), [Google Kubernetes Engine (GKE)](https://console.cloud.google.com/marketplace/product/nvidia-ngc-public/triton-inference-server), [Google Vertex AI](https://cloud.google.com/vertex-ai/docs/predictions/using-nvidia-triton), [HPE Ezmeral](https://www.hpe.com/us/en/software/marketplace/nvidia-triton.html), [Microsoft Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/), [Azure Machine Learning](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-with-triton?tabs=endpoint), and [Oracle Cloud Infrastructure Data Science Platform](https://www.oracle.com/artificial-intelligence/data-science/).

Stay up to date on the latest AI inference news from NVIDIA.

[Sign Up](https://www.nvidia.com/en-us/deep-learning-ai/triton-tensorrt-newsletter.md)
