Learn how to design, develop, and deploy computer vision and deep learning automotive applications onto GPUs, whether on your desktop, on a cluster, or on embedded Tegra platforms, including Jetson TK1/TX1/TX2 and DRIVE PX boards. The workflow starts with algorithm design in MATLAB, which enjoys broad appeal among engineers and scientists for its expressive power and ease of use. The algorithm may employ deep learning networks augmented with traditional computer vision techniques and can be tested and verified within MATLAB. Next, those networks are trained using MATLAB's GPU and parallel computing support, either on the desktop, on a local compute cluster, or in the cloud. Finally, a new compiler (released in September 2017) automatically generates portable, optimized CUDA code from the MATLAB algorithm, which is then cross-compiled and deployed to the Tegra board. We present benchmarks showing the superior performance of the auto-generated CUDA code (~7x faster than TensorFlow).
Hall: Hall D