**This is a Deep Learning Institute hands-on training lab, which requires a "Conference & Training Pass." You will also need to bring your own laptop. To prepare for the lab, please follow the instructions here: https://www.nvidia.com/content/dam/en-zz/Solutions/gtc/whitepages/DLI_Lab_Instructions.pdf**
This lab will show three approaches to deployment. The first is to use the inference functionality built directly into a deep learning framework, in this case NVIDIA DIGITS and Caffe. The second is to integrate inference into a custom application through a deep learning framework API, again using Caffe, but this time via its Python API. The final approach is to use NVIDIA TensorRT™, which automatically creates an optimized inference runtime from a trained Caffe model and network description file. In this lab, you will learn about the role of batch size in inference performance, as well as various optimizations that can be applied to the inference process. You will also explore inference for a variety of DNN architectures trained in other DLI labs.
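The role of batch size mentioned above can be previewed with a small sketch. The toy "network" here is just a NumPy weight matrix standing in for a trained model (in the lab itself you would load a real Caffe model through the Python API or TensorRT); the point is that batching changes how many samples flow through each forward pass, not the results themselves.

```python
import numpy as np

# Toy fully connected "network": one weight matrix standing in for a
# trained model. The shapes and names here are illustrative only.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 3))   # 4 input features -> 3 outputs

def infer(batch):
    """Run one forward pass over a batch of shape (batch_size, 4)."""
    return batch @ weights

samples = rng.standard_normal((8, 4))

# Batch size 1: eight forward passes, one sample each.
one_at_a_time = np.vstack([infer(samples[i:i + 1]) for i in range(8)])

# Batch size 8: a single forward pass over all samples.
all_at_once = infer(samples)

# The outputs are identical; only the number of passes differs, which is
# what drives the throughput/latency trade-off explored in the lab.
assert np.allclose(one_at_a_time, all_at_once)
print(all_at_once.shape)
```

Larger batches amortize per-call overhead and raise throughput, while smaller batches reduce the latency of any single request, which is the trade-off the lab measures on real models.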