
Define Custom Deep Learning Layers

Tip

This topic explains how to define custom deep learning layers for your problems. For a list of built-in layers in Deep Learning Toolbox™, see List of Deep Learning Layers.

You can define your own custom deep learning layers for your task. You can specify a custom loss function using a custom output layer, and you can define custom layers with or without learnable and state parameters. After defining a custom layer, you can check that the layer is valid, is GPU compatible, and outputs correctly defined gradients.

This topic explains the architecture of deep learning layers and how to define custom layers to use for your tasks.

Intermediate layer

Define a custom deep learning layer and specify optional learnable parameters and state parameters.

For more information, see Define Custom Deep Learning Intermediate Layers.

For an example showing how to define a custom layer with learnable parameters, see Define Custom Deep Learning Layer with Learnable Parameters. For an example showing how to define a custom layer with multiple inputs, see Define Custom Deep Learning Layer with Multiple Inputs.

Classification output layer

Define a custom classification output layer and specify a loss function.

For more information, see Define Custom Deep Learning Output Layers.

For an example showing how to define a custom classification output layer and specify a loss function, see Define Custom Classification Output Layer.

Regression output layer

Define a custom regression output layer and specify a loss function.

For more information, see Define Custom Deep Learning Output Layers.

For an example showing how to define a custom regression output layer and specify a loss function, see Define Custom Regression Output Layer.

Layer Templates

You can use the following templates to define new layers.

Intermediate Layer Template

Classification Output Layer Template

Regression Output Layer Template
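As a rough sketch, an intermediate layer template has approximately this shape. The class name and comments here are illustrative, not the shipped template itself; see Define Custom Deep Learning Intermediate Layers for the full template.

```matlab
classdef myCustomLayer < nnet.layer.Layer
    % Illustrative skeleton of a custom intermediate layer.

    properties
        % (Optional) Layer properties.
    end

    properties (Learnable)
        % (Optional) Learnable parameters, such as weights.
    end

    methods
        function layer = myCustomLayer(args)
            % Constructor: set the layer name and description, and
            % initialize any learnable parameters.
        end

        function Z = predict(layer,X)
            % Forward the input X through the layer and return the
            % result Z.
        end
    end
end
```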

Intermediate Layer Architecture

During training, the software iteratively performs forward and backward passes through the network.

During a forward pass through the network, each layer takes the outputs of the previous layers, applies a function, and then outputs (forward propagates) the results to the next layers. Stateful layers, such as LSTM layers, also update the layer state.

Layers can have multiple inputs or outputs. For example, a layer can take the inputs X1, …, XN from multiple previous layers and forward propagate the outputs Z1, …, ZM to subsequent layers.

At the end of a forward pass of the network, the output layer calculates the loss L between the predictions Y and the targets T.

During the backward pass through the network, each layer takes the derivatives of the loss with respect to the outputs of the layer, computes the derivatives of the loss L with respect to the inputs, and then backward propagates the results. If the layer has learnable parameters, then the layer also computes the derivatives of the loss with respect to the layer weights (learnable parameters). The software uses these derivatives to update the learnable parameters.

The following figure describes the flow of data through a deep neural network and highlights the data flow through a layer with a single input X, a single output Z, and a learnable parameter W.

Network diagram showing the flow of data through a neural network during training.

For more information about custom intermediate layers, see Define Custom Deep Learning Intermediate Layers.
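To make the single-input case in the figure concrete, here is a hedged sketch of a layer with one learnable parameter W that scales its input channel-wise. The class name and initialization scheme are illustrative; for layers defined this way, the software can compute the backward pass using automatic differentiation, so no backward function is required.

```matlab
classdef scalingLayer < nnet.layer.Layer
    % Illustrative layer computing Z = W .* X with one learnable
    % parameter W (one weight per channel).

    properties (Learnable)
        W  % Learnable channel-wise scaling weights
    end

    methods
        function layer = scalingLayer(numChannels,name)
            % Set layer name and description, then initialize the
            % learnable weights (random initialization for illustration).
            layer.Name = name;
            layer.Description = "Channel-wise learnable scaling";
            layer.W = rand(1,1,numChannels);
        end

        function Z = predict(layer,X)
            % Forward pass: scale the input by the learnable weights.
            Z = layer.W .* X;
        end
    end
end
```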

Output Layer Architecture

At the end of a forward pass at training time, an output layer takes the outputs Y of the previous layer (the network predictions) and calculates the loss L between these predictions and the training targets. The output layer computes the derivatives of the loss L with respect to the predictions Y and outputs (backward propagates) results to the previous layer.

The following figure describes the flow of data through a neural network and an output layer.

Network diagram showing the flow of data through a neural network during training.

For more information, see Define Custom Deep Learning Output Layers.
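As a hedged sketch, a custom regression output layer defines the loss in a forwardLoss function. This example assumes mean absolute error with responses along the third dimension and observations along the fourth; the class name is illustrative.

```matlab
classdef maeRegressionLayer < nnet.layer.RegressionLayer
    % Illustrative regression output layer using mean absolute error.

    methods
        function layer = maeRegressionLayer(name)
            % Set layer name and description.
            layer.Name = name;
            layer.Description = "Mean absolute error";
        end

        function loss = forwardLoss(layer,Y,T)
            % Loss between the predictions Y and the targets T,
            % assuming responses in dimension 3 and observations in
            % dimension 4.
            R = size(Y,3);
            meanAbsoluteError = sum(abs(Y-T),3)/R;

            % Average the loss over the mini-batch.
            N = size(Y,4);
            loss = sum(meanAbsoluteError)/N;
        end
    end
end
```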

Check Validity of Custom Layer

If you create a custom deep learning layer, then you can use the checkLayer function to check that the layer is valid. The function checks layers for validity, GPU compatibility, correctly defined gradients, and code generation compatibility. To check that a layer is valid, run the following command:

checkLayer(layer,layout)

Here, layer is an instance of the layer and layout is a networkDataLayout object that specifies the valid sizes and data formats for inputs to the layer. To check the layer with multiple observations, use the ObservationDimension option. To check for code generation compatibility, set the CheckCodegenCompatibility option to 1 (true). For large input sizes, the gradient checks take longer to run; to speed up the check, specify a smaller valid input size.
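For example, assuming a custom layer class named preluLayer that accepts 2-D image input, a check might look like this (the layer name and input size are illustrative):

```matlab
% Create an instance of the custom layer (illustrative class name).
layer = preluLayer;

% Describe a valid input: 24-by-24 images with 20 channels and a
% variable-size batch dimension ("SSCB" = spatial, spatial, channel, batch).
layout = networkDataLayout([24 24 20 NaN],"SSCB");

% Check validity, including gradient and GPU compatibility checks.
checkLayer(layer,layout)
```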

For more information, see Check Custom Layer Validity.
