
Workflow for Deep Learning C/C++ Code Generation for Simulink Models

With Simulink® Coder™, you can generate code from deep learning neural networks you design and implement using the Deep Learning Toolbox™. To use Simulink Coder to generate code for deep learning networks, you must also install the MATLAB® Coder Interface for Deep Learning. Deep learning uses neural networks to learn useful representations of features directly from data. You can obtain a pretrained neural network or train one yourself using the Deep Learning Toolbox. For more information, see Retrain Neural Network to Classify New Images (Deep Learning Toolbox) and Pretrained Deep Neural Networks (Deep Learning Toolbox).
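For instance, here is a minimal sketch of loading and inspecting a pretrained network at the MATLAB command line (SqueezeNet ships with Deep Learning Toolbox; many other pretrained networks require an additional support package):

    % Load a pretrained image classification network.
    net = squeezenet;

    % Inspect the layers before implementing the network in Simulink.
    analyzeNetwork(net)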

Implement the trained neural network in Simulink by using blocks from the Deep Neural Networks library or by using a MATLAB Function block. When implementing the trained neural network with a MATLAB Function block, use the coder.loadDeepLearningNetwork function to load the trained deep learning network, and then call the object functions of the network object, such as predict, to obtain the desired responses. The network must be supported for code generation. See Networks and Layers Supported for Code Generation.
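The following is a minimal sketch of a MATLAB Function block body that uses this pattern; myTrainedNetwork.mat is a placeholder for your own trained network file:

    function out = predictResponses(in)
    % Load the network once and reuse it across simulation steps.
    persistent net;
    if isempty(net)
        % Placeholder MAT-file name; replace with your trained network.
        net = coder.loadDeepLearningNetwork('myTrainedNetwork.mat');
    end
    % Call a network object function, for example predict.
    out = predict(net, in);
    end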

You can generate C++ code that targets an embedded platform that uses an Intel® processor or an ARM® processor. The generated code calls the Intel Math Kernel Library for Deep Neural Networks (MKL-DNN) or the ARM Compute Library to achieve high performance. The hardware and software requirements depend on the target platform. To use these libraries, set these parameters in the Configuration Parameters dialog box (a programmatic sketch follows the table).

Pane                          Parameter        Setting
Simulation Target             Language         C++
Simulation Target             Target library   MKL-DNN
Code Generation               Language         C++
Code Generation > Interface   Target library   MKL-DNN or ARM Compute
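You can also apply these settings programmatically with the set_param function. The sketch below assumes a model named myModel and uses SimDLTargetLibrary and DLTargetLibrary as the deep learning target library parameter names; confirm the exact parameter names for your release before relying on them.

    % Placeholder model name; replace with your own model.
    mdl = 'myModel';
    set_param(mdl, 'SimTargetLang', 'C++');          % Simulation Target > Language
    set_param(mdl, 'TargetLang', 'C++');             % Code Generation > Language
    % The two parameter names below are assumptions; verify them, for example
    % by searching the output of get_param(mdl, 'ObjectParameters').
    set_param(mdl, 'SimDLTargetLibrary', 'MKL-DNN'); % Simulation Target > Target library
    set_param(mdl, 'DLTargetLibrary', 'MKL-DNN');    % Interface > Target library, or 'ARM Compute'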

For an example that uses the MKL-DNN library, see Code Generation for Deep Learning Simulink Model That Performs Lane and Vehicle Detection.

You can also generate generic C or C++ code that does not depend on third-party libraries. To generate generic C or C++ code, set these parameters (a programmatic sketch follows the table).

Pane                          Parameter        Setting
Simulation Target             Language         C or C++
Code Generation               Language         C or C++
Code Generation > Interface   Target library   None
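Continuing the earlier sketch, the equivalent programmatic settings for generic code generation might look like this (the DLTargetLibrary parameter name is again an assumption to verify):

    set_param(mdl, 'SimTargetLang', 'C');        % or 'C++'
    set_param(mdl, 'TargetLang', 'C');           % or 'C++'
    set_param(mdl, 'DLTargetLibrary', 'None');   % no third-party library dependency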

For an example, see Generate Generic C/C++ for Sequence-to-Sequence Deep Learning Simulink Models.

Deep learning models typically work on large sets of labeled data. Performing inference on these models is computationally intensive and consumes a significant amount of memory. You can use pruning in combination with network quantization to reduce the inference time and memory footprint of a deep learning network, making it easier to deploy to low-power microcontrollers and FPGAs. For more information, see the pruning and quantization topics in the Deep Learning Toolbox documentation.

To perform quantization, you must install the Deep Learning Toolbox Model Quantization Library support package.
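As a rough sketch of that workflow, where net is a trained network and calibrationData is a representative datastore (both placeholders):

    % Create a quantizer for CPU deployment and collect dynamic ranges
    % from representative calibration data.
    quantObj = dlquantizer(net, 'ExecutionEnvironment', 'CPU');
    calResults = calibrate(quantObj, calibrationData);

The calibrated quantizer object is then used when you generate code for the quantized network; see the support package documentation for the code generation steps specific to your target.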
