
# Multi-Camera Tracking

Track objects and the customer journey across multiple cameras throughout the store.

[Learn More](https://docs.nvidia.com/mms/text/Multi-Camera_Tracking_toc.html)

## What is Multi-Camera Tracking?

Retail spaces are gaining valuable insight into the movement of objects and customers by applying computer vision AI to many cameras covering multiple physical areas. NVIDIA's customizable [multi-target, multi-camera tracking (MTMC)](https://www.nvidia.com/en-us/use-cases/ai-powered-multi-camera-tracking.md) workflow gives you a starting point so you don't have to build from scratch, eliminating months of development time. The workflow also provides a validated path to production for tracking objects across cameras in stores, warehouses, and distribution centers.

[Read More About Multi-Camera Tracking](https://docs.nvidia.com/mms/text/Multi-Camera_Tracking_toc.html)

## Explore the Multi-Camera AI Workflow

This [AI workflow](https://www.nvidia.com/en-us/ai-data-science/ai-workflows.md) uses the [NVIDIA DeepStream SDK](https://developer.nvidia.com/deepstream-sdk), pretrained models, and new state-of-the-art microservices to deliver advanced multi-target, multi-camera (MTMC) capabilities. Developers can now more easily create systems that track objects across multiple cameras throughout a retail store or warehouse.

This MTMC workflow tracks and associates objects across cameras, maintaining a unique ID for each object. The ID is derived from visual appearance embeddings rather than any personal biometric information, so privacy is preserved.
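At its core, cross-camera association compares appearance embeddings. The sketch below is purely illustrative and is not NVIDIA's microservice API: the class name, similarity threshold, and running-average update are all assumptions made for the example.

```python
# Illustrative sketch (not the actual NVIDIA MTMC microservice): assigning
# a global ID to each detection by cosine similarity of appearance
# embeddings. Threshold and update rule are made-up values.
import numpy as np

SIM_THRESHOLD = 0.8  # hypothetical match threshold


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


class GlobalIDRegistry:
    """Keeps one representative embedding per global ID."""

    def __init__(self):
        self.embeddings: dict[int, np.ndarray] = {}
        self._next_id = 0

    def assign(self, embedding: np.ndarray) -> int:
        # Match against known identities; reuse the best ID above threshold.
        best_id, best_sim = None, SIM_THRESHOLD
        for gid, ref in self.embeddings.items():
            sim = cosine(embedding, ref)
            if sim >= best_sim:
                best_id, best_sim = gid, sim
        if best_id is None:  # no match: mint a new global ID
            best_id = self._next_id
            self._next_id += 1
        # Smooth the stored embedding with a running average.
        prev = self.embeddings.get(best_id, embedding)
        self.embeddings[best_id] = 0.9 * prev + 0.1 * embedding
        return best_id
```

A production system would additionally weigh spatio-temporal plausibility (a person cannot appear in two distant aisles at once), which is why the workflow combines embeddings with spatio-temporal information.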

MTMC capabilities help bolster the security of self-checkout and are foundational for fully autonomous stores. The workflow can also be trained to detect anomalous behavior, deployed and scaled with Kubernetes, and managed with Helm.

### The workflow contains:

* Object detection and creation of feature embeddings.
* Multi-camera tracking microservice based on object embeddings and spatio-temporal information.
* Global ID generation for each uniquely identified object.
* Information storage and output.
* A customizable reference application for deploying in production.
* Guidance on how to train and customize the AI workflow.

The Multi-Camera Tracking reference application's end-to-end pipeline works as follows: it takes live camera feeds as input; performs object detection, object tracking, streaming analytics, and multi-target multi-camera tracking; exposes aggregated analytics functions as API endpoints; and visualizes the results in a browser-based user interface. Live camera feeds are simulated by streaming video files over RTSP. The analytics microservices are connected via a Kafka message broker, and processed results are saved in a database for long-term storage.
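To make the Kafka-based message flow concrete, here is a hypothetical detection event a perception microservice might publish to the broker. The field names and schema are assumptions for illustration only, not the schema documented by NVIDIA Metropolis.

```python
# Hypothetical per-frame detection event exchanged between pipeline
# microservices over Kafka. All field names are illustrative assumptions.
import json
from datetime import datetime, timezone


def make_detection_event(camera_id: str, object_id: int,
                         bbox: tuple, embedding: list) -> str:
    """Serialize one detection as a JSON message payload."""
    return json.dumps({
        "sensor_id": camera_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "object": {
            "local_id": object_id,  # tracker ID within this one camera
            "bbox": {"x": bbox[0], "y": bbox[1],
                     "w": bbox[2], "h": bbox[3]},
            "embedding": embedding,  # appearance feature vector
        },
    })


def parse_detection_event(payload: str) -> dict:
    """Deserialize and check the fields downstream services rely on."""
    event = json.loads(payload)
    assert {"sensor_id", "timestamp", "object"} <= event.keys()
    return event
```

In the real workflow, the MTMC microservice would consume a stream of such events, fuse them across cameras, and write the resulting global tracks to the database for the analytics API endpoints.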

[Read the Technical Documentation](https://docs.nvidia.com/mms/text/Multi-Camera_Tracking_toc.html)

## Apply for Early Access

Please complete this short application for early access to the multi-camera tracking AI workflow.

Please note that you must be a registered NVIDIA Developer to join the program. Log in using your organizational email address. We cannot accept applications from accounts using Gmail, Yahoo, QQ, or other personal email addresses.

[Apply Now](https://developer.nvidia.com/metropolis-microservices-early-access-form)

## Multi-Target Multi-Camera Tracking Made Easy

[Watch the Video](https://www.nvidia.com/en-us/on-demand/session/gtcfall22-a41369/)

## Key Benefits of the Multi-Camera AI Workflow

### Pretrained Models

Highly accurate models are provided to enable identifying objects and creating a unique global ID based on embeddings/appearance rather than any personal biometric information.

### Multi-Camera Tracking

A state-of-the-art microservice uses objects’ feature embeddings, along with spatio-temporal information, to uniquely identify and associate objects across cameras.

### Flexible Reference Architecture

Delivered through cloud-native microservices, this AI workflow allows for jump-starting development and ease of customization to rapidly create solutions requiring cross-camera object tracking.

## Accelerate the Development of AI Solutions

AI workflows accelerate the path to AI outcomes. The multi-camera AI workflow provides a reference for developers to rapidly get started in creating a flexible and scalable MTMC AI solution.

### Reduce Development Time

Best-in-class AI software streamlines development and deployment of AI solutions.

### Improve Accuracy and Performance

Frameworks and containers are performance-tuned and tested for NVIDIA GPUs.

### Speed Time to Deployment

Access prepackaged, customizable reference applications deployable in the cloud.

### Gain Confidence in AI Outcomes

Cloud-native NVIDIA Metropolis microservices are designed to be deployed at scale with Kubernetes and managed with Helm.

## Learn More About AI-Based Workflows for Retail

### The $100B Retail Problem

Read more about how NVIDIA is helping the retail industry tackle its $100 billion annual shrinkage problem.

[Learn More](https://blogs.nvidia.com/blog/2023/01/12/retail-ai-workflows/)

### Read the Technical Blog

Learn how the retail AI workflows address highly complex application development challenges and provide the initial ‘building blocks’ necessary to build an effective solution.

[Read Now](https://developer.nvidia.com/blog/nvidia-announces-cloud-native-metropolis-microservices-and-retail-ai-workflows-for-theft-prevention/)

### Explore the Retail Shopping Advisor AI Workflow

Leverage generative AI and NVIDIA NIM™ inference microservices to provide a more natural, personalized shopping experience. It’s like putting your best sales associate in front of every customer.

[Learn More](https://www.nvidia.com/en-us/ai-data-science/ai-workflows/retail-shopping-advisor.md)

## Sign up to receive the latest retail AI news from NVIDIA.

[Sign Up](#retail-subscribe-modal)