List of Deep Learning Layer Blocks and Subsystems
This page provides a list of deep learning layer blocks and subsystems in Simulink®. To export a MATLAB® object-based network to a Simulink model that uses deep learning layer blocks and subsystems, use the exportNetworkToSimulink function. Use layer blocks for networks that have a small number of learnable parameters and that you intend to deploy to embedded hardware.
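For example, a minimal sketch of the export workflow; the small dlnetwork here is untrained and purely illustrative:

```matlab
% Build a small feature-classification network. In practice, use a
% trained dlnetwork object (for example, from trainnet).
net = dlnetwork([ ...
    featureInputLayer(4,Normalization="none")
    fullyConnectedLayer(8)
    reluLayer
    fullyConnectedLayer(3)
    softmaxLayer]);

% Generate a Simulink model in which each layer is represented by the
% corresponding layer block or subsystem.
exportNetworkToSimulink(net);
```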
Deep Learning Layer Blocks
The exportNetworkToSimulink function generates these blocks and subsystems to represent layers in a network. Each block and subsystem corresponds to a layer object in MATLAB. For each layer in a network, the function generates the corresponding block or subsystem. If no corresponding block or subsystem exists, then the function generates a placeholder subsystem that contains an Assertion (Simulink) block.
Some layer blocks and subsystems have reduced functionality compared to the corresponding layer objects. The Limitations column in the tables in this section lists conditions where the blocks and subsystems do not have parity with the corresponding layer objects. Unless otherwise specified in the Limitations column, the exportNetworkToSimulink function throws an error for layer objects that have unsupported configurations.
For a list of deep learning layer objects in MATLAB, see List of Deep Learning Layers.
Activation Layers
Block | Corresponding Layer Object | Description | Limitations |
---|---|---|---|
Clipped ReLU Layer | clippedReluLayer | A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to that clipping ceiling. | |
Leaky ReLU Layer | leakyReluLayer | A leaky ReLU layer performs a threshold operation, where any input value less than zero is multiplied by a fixed scalar. | |
ReLU Layer | reluLayer | A ReLU layer performs a threshold operation to each element of the input, where any value less than zero is set to zero. | |
Sigmoid Layer | sigmoidLayer | A sigmoid layer applies a sigmoid function to the input such that the output is bounded in the interval (0,1). | |
Softmax Layer | softmaxLayer | A softmax layer applies a softmax function to the input. | If you specify a data format that contains spatial (S) dimensions, the block does not have full parity with the softmaxLayer object. |
Tanh Layer | tanhLayer | A hyperbolic tangent (tanh) activation layer applies the tanh function to the layer inputs. | |
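As a point of reference, the elementwise operations that these activation blocks compute can be sketched in a few lines of MATLAB. The ceiling and scale values below stand in for the Ceiling property of clippedReluLayer and the Scale property of leakyReluLayer and are illustrative:

```matlab
x = [-2 -0.5 0 0.5 2];
ceiling = 1;                           % illustrative clipping ceiling
scale = 0.01;                          % illustrative leak scale
yClipped = min(max(x,0),ceiling);      % Clipped ReLU
yLeaky   = max(x,0) + scale*min(x,0);  % Leaky ReLU
yRelu    = max(x,0);                   % ReLU
ySigmoid = 1./(1 + exp(-x));           % Sigmoid, output in (0,1)
yTanh    = tanh(x);                    % Tanh
```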
Combination Layers
Block | Corresponding Layer Object | Description | Limitations |
---|---|---|---|
Addition Layer | additionLayer | An addition layer adds inputs from multiple neural network layers element-wise. | The additionLayer object accepts scalar and vector inputs and expands those inputs to have the same dimensions as the matrix inputs, but the Addition Layer block supports expanding only scalar inputs. |
Concatenation Layer | concatenationLayer | A concatenation layer takes inputs and concatenates them along a specified dimension. The inputs must have the same size in all dimensions except the concatenation dimension. | |
Depth Concatenation Layer | depthConcatenationLayer | A depth concatenation layer takes inputs that have the same height and width and concatenates them along the channel dimension. | |
Multiplication Layer | multiplicationLayer | A multiplication layer multiplies inputs from multiple neural network layers element-wise. | The multiplicationLayer object accepts scalar and vector inputs and expands those inputs to have the same dimensions as the matrix inputs, but the Multiplication Layer block supports expanding only scalar inputs. |
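For example, a sketch of a two-branch network that merges in an addition layer; the layer names and sizes are illustrative:

```matlab
% Residual-style skip connection: both inputs to the addition layer
% have the same dimensions, which the Addition Layer block supports.
layers = [
    featureInputLayer(8,Normalization="none",Name="in")
    fullyConnectedLayer(8,Name="fc1")
    reluLayer(Name="relu1")
    additionLayer(2,Name="add")
    fullyConnectedLayer(4,Name="out")];
net = dlnetwork(layers,Initialize=false);
net = connectLayers(net,"in","add/in2");  % second input to the addition layer
net = initialize(net);
exportNetworkToSimulink(net);
```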
Convolution and Fully Connected Layers
Block | Corresponding Layer Object | Description | Limitations |
---|---|---|---|
Convolution 1D Layer | convolution1dLayer | A 1-D convolutional layer applies sliding convolutional filters to 1-D input. | |
Convolution 2D Layer | convolution2dLayer | A 2-D convolutional layer applies sliding convolutional filters to 2-D input. | |
Convolution 3D Layer | convolution3dLayer | A 3-D convolutional layer applies sliding cuboidal convolution filters to 3-D input. | |
Fully Connected Layer | fullyConnectedLayer | A fully connected layer multiplies the input by a weight matrix and then adds a bias vector. | |
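For example, a sketch of a small 1-D convolutional network over sequence input that uses blocks from this table and the pooling table; the sizes are illustrative and the network is untrained:

```matlab
layers = [
    sequenceInputLayer(3,Normalization="none")
    convolution1dLayer(5,16,Padding="same")  % filter size 5, 16 filters
    reluLayer
    globalAveragePooling1dLayer              % collapse the time dimension
    fullyConnectedLayer(2)
    softmaxLayer];
net = dlnetwork(layers);
exportNetworkToSimulink(net);
```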
Input Layers
For input layer objects that have the Normalization property set to "none", the exportNetworkToSimulink function generates an Inport (Simulink) block.
Block | Corresponding Layer Object | Description | Limitations |
---|---|---|---|
Rescale-Symmetric 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "rescale-symmetric" | The Rescale-Symmetric 1D block inputs 1-dimensional data to a neural network and rescales the input to be in the range [-1, 1]. | |
Rescale-Symmetric 2D | imageInputLayer that has the Normalization property set to "rescale-symmetric" | The Rescale-Symmetric 2D block inputs 2-dimensional image data to a neural network and rescales the input to be in the range [-1, 1]. | |
Rescale-Symmetric 3D | image3dInputLayer that has the Normalization property set to "rescale-symmetric" | The Rescale-Symmetric 3D block inputs 3-dimensional image data to a neural network and rescales the input to be in the range [-1, 1]. | |
Rescale-Zero-One 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "rescale-zero-one" | The Rescale-Zero-One 1D block inputs 1-dimensional data to a neural network and rescales the input to be in the range [0, 1]. | |
Rescale-Zero-One 2D | imageInputLayer that has the Normalization property set to "rescale-zero-one" | The Rescale-Zero-One 2D block inputs 2-dimensional image data to a neural network and rescales the input to be in the range [0, 1]. | |
Rescale-Zero-One 3D | image3dInputLayer that has the Normalization property set to "rescale-zero-one" | The Rescale-Zero-One 3D block inputs 3-dimensional image data to a neural network and rescales the input to be in the range [0, 1]. | |
Zerocenter 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "zerocenter" | The Zerocenter 1D block inputs 1-dimensional data to a neural network and rescales the input by subtracting the value of the Mean property. | |
Zerocenter 2D | imageInputLayer that has the Normalization property set to "zerocenter" | The Zerocenter 2D block inputs 2-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property. | |
Zerocenter 3D | image3dInputLayer that has the Normalization property set to "zerocenter" | The Zerocenter 3D block inputs 3-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property. | |
Zscore 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "zscore" | The Zscore 1D block inputs 1-dimensional data to a neural network and rescales the input by subtracting the value of the Mean property and dividing by the value of the StandardDeviation property. | |
Zscore 2D | imageInputLayer that has the Normalization property set to "zscore" | The Zscore 2D block inputs 2-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property and dividing by the value of the StandardDeviation property. | |
Zscore 3D | image3dInputLayer that has the Normalization property set to "zscore" | The Zscore 3D block inputs 3-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property and dividing by the value of the StandardDeviation property. | |
Exporting networks with input layer objects that have the SplitComplexInputs property set to 1 (true) is not supported.
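For example, a sketch of an input layer whose zscore normalization exports to a Zscore 1D block; the statistics here are illustrative, and in a trained network they typically come from the training data:

```matlab
mu    = [0.5 1.0 -0.2 0.0];   % illustrative per-feature means
sigma = [1.2 0.8  2.0 1.0];   % illustrative per-feature standard deviations
inLayer = featureInputLayer(4,Normalization="zscore", ...
    Mean=mu,StandardDeviation=sigma);
% The generated Zscore 1D block computes y = (x - Mean)./StandardDeviation.
```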
Normalization Layers
Block | Corresponding Layer Object | Description | Limitations |
---|---|---|---|
Batch Normalization Layer | batchNormalizationLayer | A batch normalization layer normalizes a mini-batch of data for each channel independently. | |
Layer Normalization Layer | layerNormalizationLayer | A layer normalization layer normalizes a mini-batch of data across all channels. | If you set the Data format parameter to certain values, the block does not have full parity with the layerNormalizationLayer object. |
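As a sketch of the per-channel operation the Batch Normalization Layer block performs at prediction time, using the layer's trained statistics (TrainedMean, TrainedVariance) and learned parameters (Offset, Scale); the values below are illustrative:

```matlab
x = randn(1,1,8);                         % one spatial element, 8 channels
mu = zeros(1,1,8); sigma2 = ones(1,1,8);  % TrainedMean, TrainedVariance
gamma = ones(1,1,8); beta = zeros(1,1,8); % Scale, Offset
epsilon = 1e-5;                           % Epsilon property
y = gamma.*(x - mu)./sqrt(sigma2 + epsilon) + beta;
```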
Pooling Layers
Block | Corresponding Layer Object | Description | Limitations |
---|---|---|---|
Average Pooling 1D Layer | averagePooling1dLayer | A 1-D average pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the average of each region. | |
Average Pooling 2D Layer | averagePooling2dLayer | A 2-D average pooling layer performs downsampling by dividing the input into rectangular pooling regions, then computing the average of each region. | |
Average Pooling 3D Layer | averagePooling3dLayer | A 3-D average pooling layer performs downsampling by dividing three-dimensional input into cuboidal pooling regions, then computing the average values of each region. | |
Global Average Pooling 1D Layer | globalAveragePooling1dLayer | A 1-D global average pooling layer performs downsampling by outputting the average of the time or spatial dimensions of the input. | |
Global Average Pooling 2D Layer | globalAveragePooling2dLayer | A 2-D global average pooling layer performs downsampling by computing the mean of the height and width dimensions of the input. | |
Global Average Pooling 3D Layer | globalAveragePooling3dLayer | A 3-D global average pooling layer performs downsampling by computing the mean of the height, width, and depth dimensions of the input. | |
Global Max Pooling 1D Layer | globalMaxPooling1dLayer | A 1-D global max pooling layer performs downsampling by outputting the maximum of the time or spatial dimensions of the input. | |
Global Max Pooling 2D Layer | globalMaxPooling2dLayer | A 2-D global max pooling layer performs downsampling by computing the maximum of the height and width dimensions of the input. | |
Global Max Pooling 3D Layer | globalMaxPooling3dLayer | A 3-D global max pooling layer performs downsampling by computing the maximum of the height, width, and depth dimensions of the input. | |
Max Pooling 1D Layer | maxPooling1dLayer | A 1-D max pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the maximum of each region. | The Layer parameter has limited support for some maxPooling1dLayer property configurations. |
Max Pooling 2D Layer | maxPooling2dLayer | A 2-D max pooling layer performs downsampling by dividing the input into rectangular pooling regions, then computing the maximum of each region. | |
Max Pooling 3D Layer | maxPooling3dLayer | A 3-D max pooling layer performs downsampling by dividing three-dimensional input into cuboidal pooling regions, then computing the maximum of each region. | |
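As a sketch of the region arithmetic behind the non-global pooling blocks, for a 1-D input of length n, pool size p, and stride s with no padding; the values are illustrative:

```matlab
n = 10; p = 3; s = 2;
x = 1:n;
outLen = floor((n - p)/s) + 1;   % number of pooling regions (here, 4)
yAvg = arrayfun(@(k) mean(x((k-1)*s+1 : (k-1)*s+p)), 1:outLen)  % average pooling
yMax = arrayfun(@(k)  max(x((k-1)*s+1 : (k-1)*s+p)), 1:outLen)  % max pooling
```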
Sequence Layers
Block | Corresponding Layer Object | Description | Limitations |
---|---|---|---|
Flatten Layer | flattenLayer | A flatten layer collapses the spatial dimensions of the input into the channel dimension. | |
GRU Layer | gruLayer | A GRU layer is an RNN layer that learns dependencies between time steps in time-series and sequence data. | The Layer parameter does not accept gruLayer objects that use certain property configurations. |
GRU Projected Layer | gruProjectedLayer | A GRU projected layer is an RNN layer that learns dependencies between time steps in time-series and sequence data using projected learnable weights. | The Layer parameter does not accept gruProjectedLayer objects that use certain property configurations. |
LSTM Layer | lstmLayer | An LSTM layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data. The layer performs additive interactions, which can help improve gradient flow over long sequences during training. | The Layer parameter does not accept lstmLayer objects that use certain property configurations. |
LSTM Projected Layer | lstmProjectedLayer | An LSTM projected layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data using projected learnable weights. | |
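For example, a sketch of a small sequence network that exports to the sequence layer blocks above; the network is untrained and the sizes are illustrative:

```matlab
layers = [
    sequenceInputLayer(6,Normalization="none")
    lstmLayer(32)            % swap in gruLayer(32) for a GRU Layer block
    fullyConnectedLayer(3)
    softmaxLayer];
net = dlnetwork(layers);
exportNetworkToSimulink(net);
```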
Utility Layers
Block | Corresponding Layer Object | Description | Limitations |
---|---|---|---|
Dropout Layer | dropoutLayer | At training time, a dropout layer randomly sets input elements to zero with a given probability. At prediction time, the output of a dropout layer is equal to its input. Because deep learning layer blocks can be used only for prediction, this block has no effect and simply passes its input through to its output. | |
Neural ODE Layers
Subsystem | Corresponding Layer Object | Description | Limitations |
---|---|---|---|
Integrator block as ODE solver and ODE network represented as layer blocks | neuralODELayer | A neural ODE layer learns to represent dynamic behavior as a system of ODEs. | The subsystem supports continuous-time integration only. For discrete-time integration (for example, for fixed-point conversion applications), replace the Integrator block in the subsystem with a Discrete-Time Integrator block. |
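For example, a sketch of a network containing a neural ODE layer; the inner ODE network and the time span [0 1] are illustrative:

```matlab
% The ODE network maps an 8-element state to an 8-element derivative.
odeNet = dlnetwork([ ...
    fullyConnectedLayer(8)
    tanhLayer
    fullyConnectedLayer(8)],Initialize=false);

layers = [
    featureInputLayer(8,Normalization="none")
    neuralODELayer(odeNet,[0 1])   % integrate from t = 0 to t = 1
    fullyConnectedLayer(2)];
net = dlnetwork(layers);
exportNetworkToSimulink(net);
```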