Implement Deep Learning Applications for NVIDIA GPUs with GPU Coder
GPU Coder™ generates readable and portable CUDA® code from your MATLAB® algorithm, leveraging CUDA libraries such as cuBLAS and cuDNN. The generated code can then be cross-compiled and deployed to NVIDIA® GPUs, from Tesla® boards to the embedded Jetson™ platform.
The first part of this talk describes how MATLAB is used to design and prototype end-to-end systems that combine a deep learning network with computer vision algorithms. You’ll learn about the capabilities in MATLAB for accessing and managing large data sets, as well as for using pretrained models to get started quickly with deep learning design. Then, you’ll see how the distributed and GPU computing capabilities integrated with MATLAB are employed during training, debugging, and verification of the network. Finally, most end-to-end systems need more than just classification: data must be pre- and post-processed, and the results are often inputs to a downstream control system. These traditional computer vision and control algorithms, written in MATLAB, interface with the deep learning network to build up the end-to-end system.
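The first-part workflow can be sketched in MATLAB as follows. This is a minimal, illustrative example: the image folder path, the five-class problem, and the training options are assumptions, not details from the talk.

```matlab
% Access and manage a large image set without loading it all into memory.
imds = imageDatastore('path/to/images', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[trainSet, valSet] = splitEachLabel(imds, 0.8, 'randomized');

% Start from a pretrained reference network
% (requires the AlexNet support package).
net = alexnet;
layers = net.Layers;

% Replace the final layers for a hypothetical 5-class problem.
numClasses = 5;
layers(end-2) = fullyConnectedLayer(numClasses);
layers(end)   = classificationLayer;

% Train with integrated (multi-)GPU support and a live training plot.
opts = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'multi-gpu', ...
    'Plots', 'training-progress', ...
    'ValidationData', valSet);
trainedNet = trainNetwork(trainSet, layers, opts);
```

The `imageDatastore` reads images lazily from disk, which is what makes large data sets manageable; switching `'ExecutionEnvironment'` between `'gpu'` and `'multi-gpu'` is the only change needed to scale training across devices.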
The second part of this talk focuses on the embedded deployment phase. Using representative examples from automated driving to illustrate the entire workflow, you’ll see how GPU Coder automatically analyzes your MATLAB algorithm to (a) partition the algorithm between CPU and GPU execution; (b) infer memory dependencies; (c) allocate to the GPU memory hierarchy (including global, local, shared, and constant memories); (d) minimize data transfers and device synchronizations between CPU and GPU; and (e) finally generate CUDA code that leverages optimized CUDA libraries such as cuBLAS and cuDNN to deliver high performance.
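Invoking that code generation step is a short script. The sketch below assumes a hypothetical entry-point function `myDetector.m` (one that loads a network with `coder.loadDeepLearningNetwork` and adds pre/post-processing) and a 227x227x3 single-precision input; both are illustrative, not from the talk.

```matlab
% Configure GPU Coder to emit a static library with a report.
cfg = coder.gpuConfig('lib');
cfg.GenerateReport = true;

% Map the deep learning layers onto cuDNN; cuBLAS is used for the
% underlying matrix math automatically.
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');

% Generate CUDA code for the (hypothetical) entry-point function.
codegen -config cfg myDetector -args {ones(227,227,3,'single')}
```

The generated code-generation report shows the CPU/GPU partitioning, the inferred kernels, and the memory transfers GPU Coder was able to eliminate.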
Finally, you’ll see benchmarks showing that the generated code is highly optimized: deep learning inference with the auto-generated CUDA code runs ~2.5x faster than MXNet, ~5x faster than Caffe2, and ~7x faster than TensorFlow®.
Watch this talk to learn how to:
1. Access and manage large image sets
2. Visualize networks and gain insight into the training process
3. Import reference networks such as AlexNet and GoogLeNet
4. Automatically generate portable and optimized CUDA code from the MATLAB algorithm
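The reference-network steps above can be sketched as follows; each pretrained model requires its free add-on support package, and the calls shown are standard Deep Learning Toolbox and GPU Coder functions.

```matlab
% Import reference networks and inspect them.
net1 = alexnet;            % series network
net2 = googlenet;          % DAG network
analyzeNetwork(net2);      % visualize layers and check for issues

% Inside code written for GPU Coder, load the network this way instead,
% so the generated CUDA code embeds the network parameters:
net = coder.loadDeepLearningNetwork('googlenet');
```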
You can find the code examples used in the webinar as part of the shipping examples for GPU Coder.