# dlnetwork

Deep learning network for custom training loops

## Description

A `dlnetwork` object enables support for custom training loops using automatic differentiation.

**Tip**

For most deep learning tasks, you can use a pretrained network and adapt it to your own data. For an example showing how to use transfer learning to retrain a convolutional neural network to classify a new set of images, see Train Deep Learning Network to Classify New Images. Alternatively, you can create and train networks from scratch using `layerGraph` objects with the `trainNetwork` and `trainingOptions` functions.

If the `trainingOptions` function does not provide the training options that you need for your task, then you can create a custom training loop using automatic differentiation. To learn more, see Define Deep Learning Network for Custom Training Loops.

## Creation

### Syntax

`dlnet = dlnetwork(layers)`

`dlnet = dlnetwork(layers,dlX1,...,dlXn)`

`dlnet = dlnetwork(layers,'Initialize',tf)`

### Description

`dlnet = dlnetwork(layers)` converts the network layers specified in `layers` to an initialized `dlnetwork` object representing a deep neural network for use with custom training loops. `layers` can be a `LayerGraph` object or a `Layer` array and must contain an input layer. An initialized `dlnetwork` object is ready for training: the learnable parameters and state values of `dlnet` are initialized with initial values based on the input size defined by the network input layer.
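For example, the following minimal sketch (the layer names and sizes are illustrative) builds an initialized `dlnetwork` from a `Layer` array that contains an input layer:

```
% A minimal sketch, assuming a small image classification architecture.
layers = [
    imageInputLayer([28 28 1],'Normalization','none','Name','in')
    convolution2dLayer(3,16,'Padding','same','Name','conv')
    reluLayer('Name','relu')
    fullyConnectedLayer(10,'Name','fc')
    softmaxLayer('Name','sm')];

dlnet = dlnetwork(layers);   % initialized and ready for training
```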

`dlnet = dlnetwork(layers,dlX1,...,dlXn)` creates an initialized `dlnetwork` object using the example inputs `dlX1,...,dlXn`. The learnable parameters and state values of `dlnet` are initialized based on the input size and format defined by the example inputs. Use this syntax to create an initialized `dlnetwork` with inputs that are not connected to an input layer.
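For example, a minimal sketch (the layer names and input size are illustrative) that initializes a network without an input layer from an example input:

```
% No input layer, so a formatted example input defines the size and format.
layers = [
    convolution2dLayer(3,16,'Padding','same','Name','conv')
    reluLayer('Name','relu')
    fullyConnectedLayer(10,'Name','fc')
    softmaxLayer('Name','sm')];

dlX = dlarray(rand(28,28,1,1),"SSCB");   % spatial, spatial, channel, batch
dlnet = dlnetwork(layers,dlX);           % initialized from the example input
```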

`dlnet = dlnetwork(layers,'Initialize',tf)` specifies whether to return an initialized or uninitialized `dlnetwork`. Use this syntax to create an uninitialized network.

An uninitialized network has unset, empty values for learnable and state parameters and is not ready for training. You must initialize an uninitialized `dlnetwork` before you can use it. Create an uninitialized network when you want to defer initialization to a later point, for example, when you build complex networks from intermediate building blocks that you then connect together, as in Deep Learning Network Composition workflows. You can initialize an uninitialized `dlnetwork` using the `initialize` function.
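For example, a minimal sketch of deferred initialization (the layer names and the example input size are illustrative):

```
% Create the network uninitialized, then initialize it later.
layers = [
    convolution2dLayer(3,16,'Name','conv')
    reluLayer('Name','relu')];

dlnet = dlnetwork(layers,'Initialize',false);   % dlnet.Initialized is 0

dlX = dlarray(rand(32,32,3,1),"SSCB");          % example input
dlnet = initialize(dlnet,dlX);                  % dlnet.Initialized is now 1
```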

### Input Arguments

**`layers`**

Network layers, specified as a `LayerGraph` object or as a `Layer` array.

If `layers` is a `Layer` array, then the `dlnetwork` function connects the layers in series.

The network layers must not contain output layers. When training the network, calculate the loss separately.
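For example, a minimal sketch of computing the loss outside the network (the variables `dlnet`, `dlX`, and the one-hot targets `T` are assumed to exist):

```
% Forward pass, then compute the loss separately from the network.
dlY = forward(dlnet,dlX);      % network output (no output layer)
loss = crossentropy(dlY,T);    % T: one-hot encoded targets
```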

For a list of layers supported by `dlnetwork`, see Supported Layers.

**`dlX1,...,dlXn`**

Example network inputs, specified as formatted `dlarray` objects. The software propagates the example inputs through the network to determine the appropriate sizes and formats of the learnable and state parameters of the `dlnetwork`.

When `layers` is a `Layer` array, provide the example inputs in the same order that the layers that require inputs appear in the `Layer` array. When `layers` is a `LayerGraph` object, provide the example inputs in the same order that the layers that require inputs appear in the `Layers` property of the `LayerGraph`.

Example inputs are not supported when `tf` is `false`.

**`tf`**

Flag to return an initialized `dlnetwork`, specified as a numeric or logical `1` (`true`) or `0` (`false`).

If `tf` is `true` or `1`, the learnable and state parameters of `dlnet` are initialized with initial values for training, according to the network input layer or the example inputs provided.

If `tf` is `false` or `0`, learnable and state parameters are not initialized. Before you use an uninitialized network, you must first initialize it using the `initialize` function. Example inputs are not supported when `tf` is `false`.

## Properties

**`Layers`**

Network layers, specified as a `Layer` array.

**`Connections`**

Layer connections, specified as a table with two columns.

Each table row represents a connection in the layer graph. The first column, `Source`, specifies the source of each connection. The second column, `Destination`, specifies the destination of each connection. The connection sources and destinations are either layer names or have the form `'layerName/IOName'`, where `'IOName'` is the name of the layer input or output.

Data Types: `table`

**`Learnables`**

Network learnable parameters, specified as a table with three columns:

• `Layer` – Layer name, specified as a string scalar.

• `Parameter` – Parameter name, specified as a string scalar.

• `Value` – Value of parameter, specified as a `dlarray` object.

The network learnable parameters contain the features learned by the network, for example, the weights of convolution and fully connected layers.

Data Types: `table`
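For example, a minimal sketch of reading one learnable value from the table (assuming an initialized network `dlnet`):

```
% Each row pairs a layer and parameter name with a dlarray value.
learnables = dlnet.Learnables;
w = learnables.Value{1};   % dlarray holding the first learnable parameter
size(w)                    % for example, the weights of the first convolution
```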

**`State`**

Network state, specified as a table with three columns:

• `Layer` – Layer name, specified as a string scalar.

• `Parameter` – Parameter name, specified as a string scalar.

• `Value` – Value of parameter, specified as a `dlarray` object.

The network state contains information remembered by the network between iterations, for example, the state of LSTM and batch normalization layers.

During training or inference, you can update the network state using the output of the `forward` and `predict` functions.

Data Types: `table`
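For example, a minimal sketch of committing the updated state after a training-mode forward pass (assuming `dlnet` and a formatted input `dlX` exist):

```
% forward returns the updated state alongside the network output.
[dlY,state] = forward(dlnet,dlX);
dlnet.State = state;   % commit the updated state for the next iteration
```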

**`InputNames`**

Network input layer names, specified as a cell array of character vectors.

Data Types: `cell`

**`OutputNames`**

Network output layer names, specified as a cell array of character vectors. This property includes all layers with disconnected outputs. If a layer has multiple outputs, then the disconnected outputs are specified as `'layerName/outputName'`.

Data Types: `cell`

**`Initialized`**

Flag for initialized network, specified as `0` (`false`) or `1` (`true`).

If `Initialized` is `0`, the network is not initialized. You must initialize the network before you can use it. Initialize the network using the `initialize` function.

If `Initialized` is `1`, the network is initialized and can be used for training and inference. If you change the values of learnable parameters (for example, during training), the value of `Initialized` remains `1`.

Data Types: `logical`

## Object Functions

| Function | Description |
| --- | --- |
| `forward` | Compute deep learning network output for training |
| `predict` | Compute deep learning network output for inference |
| `initialize` | Initialize learnable and state parameters of a `dlnetwork` |
| `layerGraph` | Graph of network layers for deep learning |
| `setL2Factor` | Set L2 regularization factor of layer learnable parameter |
| `setLearnRateFactor` | Set learn rate factor of layer learnable parameter |
| `getLearnRateFactor` | Get learn rate factor of layer learnable parameter |
| `getL2Factor` | Get L2 regularization factor of layer learnable parameter |
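For example, a minimal sketch of adjusting a learn rate factor (assuming a network `dlnet` with a layer named `conv`; the names are illustrative):

```
% Double the learn rate factor of the weights of layer 'conv',
% then read it back.
dlnet = setLearnRateFactor(dlnet,'conv','Weights',2);
factor = getLearnRateFactor(dlnet,'conv','Weights')
```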

## Examples

### Convert Pretrained Network to dlnetwork Object

To implement a custom training loop for your network, first convert it to a `dlnetwork` object. Do not include output layers in a `dlnetwork` object. Instead, you must specify the loss function in the custom training loop.

Load a pretrained GoogLeNet model using the `googlenet` function. This function requires the Deep Learning Toolbox™ Model for GoogLeNet Network support package. If this support package is not installed, then the function provides a download link.

`net = googlenet;`

Convert the network to a layer graph and remove the layers used for classification using `removeLayers`.

```
lgraph = layerGraph(net);
lgraph = removeLayers(lgraph,["prob" "output"]);
```

Convert the network to a `dlnetwork` object.

`dlnet = dlnetwork(lgraph)`
```
dlnet = 
  dlnetwork with properties:

         Layers: [142x1 nnet.cnn.layer.Layer]
    Connections: [168x2 table]
     Learnables: [116x3 table]
          State: [0x3 table]
     InputNames: {'data'}
    OutputNames: {'loss3-classifier'}
    Initialized: 1
```

### Create Multi-Input dlnetwork Using Example Inputs

Use example inputs to create a multi-input `dlnetwork` that is ready for training. The software propagates the example inputs through the network to determine the appropriate sizes and formats of the learnable and state parameters of the `dlnetwork`.

Define the network architecture. Construct a network with two branches. The network takes two inputs, with one input per branch. Connect the branches using an addition layer.

```
numFilters = 24;

layersBranch1 = [
    convolution2dLayer(3,6*numFilters,'Padding','same','Stride',2,'Name','conv1Branch1')
    groupNormalizationLayer('all-channels','Name','gn1Branch1')
    reluLayer('Name','relu1Branch1')
    convolution2dLayer(3,numFilters,'Padding','same','Name','conv2Branch1')
    groupNormalizationLayer('channel-wise','Name','gn2Branch1')
    additionLayer(2,'Name','add')
    reluLayer('Name','reluCombined')
    fullyConnectedLayer(10,'Name','fc')
    softmaxLayer('Name','sm')];

layersBranch2 = [
    convolution2dLayer(1,numFilters,'Name','convBranch2')
    groupNormalizationLayer('all-channels','Name','gnBranch2')];

lgraph = layerGraph(layersBranch1);
lgraph = addLayers(lgraph,layersBranch2);
lgraph = connectLayers(lgraph,'gnBranch2','add/in2');
```

Create example network inputs of the same size and format as typical network inputs. For both inputs, use a batch size of 32. For the input to the layer `conv1Branch1`, use an input of size 64-by-64 with three channels. For the input to the layer `convBranch2`, use an input of size 32-by-32 with 18 channels.

```
dlX1 = dlarray(rand([64 64 3 32]),"SSCB");
dlX2 = dlarray(rand([32 32 18 32]),"SSCB");
```

Create the `dlnetwork`. Provide the inputs in the same order that the unconnected layers appear in the `Layers` property of `lgraph`.

`dlnet = dlnetwork(lgraph,dlX1,dlX2);`

Check that the network is initialized and ready for training.

`dlnet.Initialized`
```
ans = 1
```

### Train Network Using Custom Training Loop

This example shows how to train a network that classifies handwritten digits with a custom learning rate schedule.

If `trainingOptions` does not provide the options you need (for example, a custom learning rate schedule), then you can define your own custom training loop using automatic differentiation.

This example trains a network to classify handwritten digits with the time-based decay learning rate schedule: for each iteration, the solver uses the learning rate given by $\rho_t = \frac{\rho_0}{1 + k t}$, where $t$ is the iteration number, $\rho_0$ is the initial learning rate, and $k$ is the decay.
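As a quick check of the schedule, the following sketch evaluates the learning rate for the first few iterations using the values chosen later in this example:

```
% Time-based decay: learnRate = initialLearnRate/(1 + decay*t).
initialLearnRate = 0.01;
decay = 0.01;
t = 1:3;
learnRate = initialLearnRate ./ (1 + decay*t)
% learnRate = 0.0099  0.0098  0.0097 (approximately)
```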

**Load Training Data**

Load the digits data as an image datastore using the `imageDatastore` function and specify the folder containing the image data.

```
dataFolder = fullfile(toolboxdir('nnet'),'nndemos','nndatasets','DigitDataset');
imds = imageDatastore(dataFolder, ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
```

Partition the data into training and validation sets. Set aside 10% of the data for validation using the `splitEachLabel` function.

`[imdsTrain,imdsValidation] = splitEachLabel(imds,0.9,'randomize');`

The network used in this example requires input images of size 28-by-28-by-1. To automatically resize the training images, use an augmented image datastore. Specify additional augmentation operations to perform on the training images: randomly translate the images up to 5 pixels in the horizontal and vertical axes. Data augmentation helps prevent the network from overfitting and memorizing the exact details of the training images.

```
inputSize = [28 28 1];
pixelRange = [-5 5];

imageAugmenter = imageDataAugmenter( ...
    'RandXTranslation',pixelRange, ...
    'RandYTranslation',pixelRange);

augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain,'DataAugmentation',imageAugmenter);
```

To automatically resize the validation images without performing further data augmentation, use an augmented image datastore without specifying any additional preprocessing operations.

`augimdsValidation = augmentedImageDatastore(inputSize(1:2),imdsValidation);`

Determine the number of classes in the training data.

```
classes = categories(imdsTrain.Labels);
numClasses = numel(classes);
```

**Define Network**

Define the network for image classification.

```
layers = [
    imageInputLayer(inputSize,'Normalization','none','Name','input')
    convolution2dLayer(5,20,'Name','conv1')
    batchNormalizationLayer('Name','bn1')
    reluLayer('Name','relu1')
    convolution2dLayer(3,20,'Padding','same','Name','conv2')
    batchNormalizationLayer('Name','bn2')
    reluLayer('Name','relu2')
    convolution2dLayer(3,20,'Padding','same','Name','conv3')
    batchNormalizationLayer('Name','bn3')
    reluLayer('Name','relu3')
    fullyConnectedLayer(numClasses,'Name','fc')
    softmaxLayer('Name','softmax')];

lgraph = layerGraph(layers);
```

Create a `dlnetwork` object from the layer graph.

`dlnet = dlnetwork(lgraph)`
```
dlnet = 
  dlnetwork with properties:

         Layers: [12×1 nnet.cnn.layer.Layer]
    Connections: [11×2 table]
     Learnables: [14×3 table]
          State: [6×3 table]
     InputNames: {'input'}
    OutputNames: {'softmax'}
```

**Define Model Gradients Function**

Create the function `modelGradients`, listed at the end of the example, that takes a `dlnetwork` object and a mini-batch of input data with corresponding labels, and returns the gradients of the loss with respect to the learnable parameters in the network and the corresponding loss.

**Specify Training Options**

Train for ten epochs with a mini-batch size of 128.

```
numEpochs = 10;
miniBatchSize = 128;
```

Specify the options for SGDM optimization. Specify an initial learning rate of 0.01, a decay of 0.01, and a momentum of 0.9.

```
initialLearnRate = 0.01;
decay = 0.01;
momentum = 0.9;
```

**Train Model**

Create a `minibatchqueue` object that processes and manages mini-batches of images during training. For each mini-batch:

• Use the custom mini-batch preprocessing function `preprocessMiniBatch` (defined at the end of this example) to convert the labels to one-hot encoded variables.

• Format the image data with the dimension labels `'SSCB'` (spatial, spatial, channel, batch). By default, the `minibatchqueue` object converts the data to `dlarray` objects with underlying type `single`. Do not add a format to the class labels.

• Train on a GPU if one is available. By default, the `minibatchqueue` object converts each output to a `gpuArray` if a GPU is available. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox).

```
mbq = minibatchqueue(augimdsTrain,...
    'MiniBatchSize',miniBatchSize,...
    'MiniBatchFcn',@preprocessMiniBatch,...
    'MiniBatchFormat',{'SSCB',''});
```

Initialize the training progress plot.

```
figure
lineLossTrain = animatedline('Color',[0.85 0.325 0.098]);
ylim([0 inf])
xlabel("Iteration")
ylabel("Loss")
grid on
```

Initialize the velocity parameter for the SGDM solver.

`velocity = [];`

Train the network using a custom training loop. For each epoch, shuffle the data and loop over mini-batches of data. For each mini-batch:

• Evaluate the model gradients, state, and loss using the `dlfeval` and `modelGradients` functions and update the network state.

• Determine the learning rate for the time-based decay learning rate schedule.

• Update the network parameters using the `sgdmupdate` function.

• Display the training progress.

```
iteration = 0;
start = tic;

% Loop over epochs.
for epoch = 1:numEpochs
    % Shuffle data.
    shuffle(mbq);
    
    % Loop over mini-batches.
    while hasdata(mbq)
        iteration = iteration + 1;
        
        % Read mini-batch of data.
        [dlX, dlY] = next(mbq);
        
        % Evaluate the model gradients, state, and loss using dlfeval and the
        % modelGradients function and update the network state.
        [gradients,state,loss] = dlfeval(@modelGradients,dlnet,dlX,dlY);
        dlnet.State = state;
        
        % Determine learning rate for time-based decay learning rate schedule.
        learnRate = initialLearnRate/(1 + decay*iteration);
        
        % Update the network parameters using the SGDM optimizer.
        [dlnet,velocity] = sgdmupdate(dlnet,gradients,velocity,learnRate,momentum);
        
        % Display the training progress.
        D = duration(0,0,toc(start),'Format','hh:mm:ss');
        addpoints(lineLossTrain,iteration,loss)
        title("Epoch: " + epoch + ", Elapsed: " + string(D))
        drawnow
    end
end
```

**Test Model**

Test the classification accuracy of the model by comparing the predictions on the validation set with the true labels.

After training, making predictions on new data does not require the labels. Create a `minibatchqueue` object containing only the predictors of the test data:

• To ignore the labels for testing, set the number of outputs of the mini-batch queue to 1.

• Specify the same mini-batch size used for training.

• Preprocess the predictors using the `preprocessMiniBatchPredictors` function, listed at the end of the example.

• For the single output of the datastore, specify the mini-batch format `'SSCB'` (spatial, spatial, channel, batch).

```
numOutputs = 1;

mbqTest = minibatchqueue(augimdsValidation,numOutputs, ...
    'MiniBatchSize',miniBatchSize, ...
    'MiniBatchFcn',@preprocessMiniBatchPredictors, ...
    'MiniBatchFormat','SSCB');
```

Loop over the mini-batches and classify the images using the `modelPredictions` function, listed at the end of the example.

`predictions = modelPredictions(dlnet,mbqTest,classes);`

Evaluate the classification accuracy.

```
YTest = imdsValidation.Labels;
accuracy = mean(predictions == YTest)
```

```
accuracy = 0.9530
```

**Model Gradients Function**

The `modelGradients` function takes a `dlnetwork` object `dlnet` and a mini-batch of input data `dlX` with corresponding labels `Y`, and returns the gradients of the loss with respect to the learnable parameters in `dlnet`, the network state, and the loss. To compute the gradients automatically, use the `dlgradient` function.

```
function [gradients,state,loss] = modelGradients(dlnet,dlX,Y)

[dlYPred,state] = forward(dlnet,dlX);

loss = crossentropy(dlYPred,Y);
gradients = dlgradient(loss,dlnet.Learnables);

loss = double(gather(extractdata(loss)));

end
```

**Model Predictions Function**

The `modelPredictions` function takes a `dlnetwork` object `dlnet`, a `minibatchqueue` of input data `mbq`, and the network classes, and computes the model predictions by iterating over all data in the `minibatchqueue` object. The function uses the `onehotdecode` function to find the predicted class with the highest score.

```
function predictions = modelPredictions(dlnet,mbq,classes)

predictions = [];

while hasdata(mbq)
    dlXTest = next(mbq);
    dlYPred = predict(dlnet,dlXTest);
    
    YPred = onehotdecode(dlYPred,classes,1)';
    
    predictions = [predictions; YPred];
end

end
```

**Mini-Batch Preprocessing Function**

The `preprocessMiniBatch` function preprocesses a mini-batch of predictors and labels using the following steps:

1. Preprocess the images using the `preprocessMiniBatchPredictors` function.

2. Extract the label data from the incoming cell array and concatenate into a categorical array along the second dimension.

3. One-hot encode the categorical labels into numeric arrays. Encoding into the first dimension produces an encoded array that matches the shape of the network output.

```
function [X,Y] = preprocessMiniBatch(XCell,YCell)

% Preprocess predictors.
X = preprocessMiniBatchPredictors(XCell);

% Extract label data from cell and concatenate.
Y = cat(2,YCell{1:end});

% One-hot encode labels.
Y = onehotencode(Y,1);

end
```

**Mini-Batch Predictors Preprocessing Function**

The `preprocessMiniBatchPredictors` function preprocesses a mini-batch of predictors by extracting the image data from the input cell array and concatenating it into a numeric array. For grayscale input, concatenating over the fourth dimension adds a third dimension to each image to use as a singleton channel dimension.

```
function X = preprocessMiniBatchPredictors(XCell)

% Concatenate.
X = cat(4,XCell{1:end});

end
```

### Freeze Learnable Parameters of dlnetwork Object

Load a pretrained SqueezeNet network.

`net = squeezenet;`

Convert the network to a layer graph, remove the output layer, and convert it to a `dlnetwork` object.

```
lgraph = layerGraph(net);
lgraph = removeLayers(lgraph,'ClassificationLayer_predictions');
dlnet = dlnetwork(lgraph);
```

The `Learnables` property of the `dlnetwork` object is a table that contains the learnable parameters of the network. The table includes parameters of nested layers in separate rows. View the first few rows of the learnables table.

```
learnables = dlnet.Learnables;
head(learnables)
```

```
ans=8×3 table
         Layer            Parameter           Value       
    __________________    _________    ___________________

    "conv1"               "Weights"    {3x3x3x64 dlarray}
    "conv1"               "Bias"       {1x1x64 dlarray}
    "fire2-squeeze1x1"    "Weights"    {1x1x64x16 dlarray}
    "fire2-squeeze1x1"    "Bias"       {1x1x16 dlarray}
    "fire2-expand1x1"     "Weights"    {1x1x16x64 dlarray}
    "fire2-expand1x1"     "Bias"       {1x1x64 dlarray}
    "fire2-expand3x3"     "Weights"    {3x3x16x64 dlarray}
    "fire2-expand3x3"     "Bias"       {1x1x64 dlarray}
```

To freeze the learnable parameters of the network, loop over the learnable parameters and set the learn rate to 0 using the `setLearnRateFactor` function.

```
factor = 0;

numLearnables = size(learnables,1);
for i = 1:numLearnables
    layerName = learnables.Layer(i);
    parameterName = learnables.Parameter(i);
    dlnet = setLearnRateFactor(dlnet,layerName,parameterName,factor);
end
```

To use the updated learn rate factors when training, you must pass the `dlnetwork` object to the update function in the custom training loop. For example, use the command:

```
[dlnet,velocity] = sgdmupdate(dlnet,gradients,velocity);
```

### Create Uninitialized dlnetwork

Create an uninitialized `dlnetwork` object without an input layer. Creating an uninitialized `dlnetwork` is useful when you do not yet know the size and format of the network inputs, for example, when the `dlnetwork` is nested inside a custom layer.

Define the network layers. This network has a single input, which is not connected to an input layer.

```
layers = [
    convolution2dLayer(5,20,'Name','conv')
    batchNormalizationLayer('Name','bn')
    reluLayer('Name','relu')
    fullyConnectedLayer(10,'Name','fc')
    softmaxLayer('Name','sm')];
```

Create an uninitialized `dlnetwork`. Set the `Initialize` name-value option to `false`.

`dlnet = dlnetwork(layers,'Initialize',false);`

Check that the network is not initialized.

`dlnet.Initialized`
```
ans = 0
```

The learnable and state parameters of this network are not initialized for training. To initialize the network, use the `initialize` function.

If you want to use `dlnet` directly in a custom training loop, then you can initialize it by using the `initialize` function and providing an example input.
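For example, a minimal sketch (the input size is an assumption for illustration):

```
% Initialize the network with a formatted example input.
dlX = dlarray(rand(28,28,3,1),"SSCB");
dlnet = initialize(dlnet,dlX);
dlnet.Initialized   % now 1
```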

If you want to use `dlnet` inside a custom layer, then you can take advantage of automatic initialization. If you use the custom layer inside a `dlnetwork`, `dlnet` is initialized when the parent `dlnetwork` is constructed (or when the parent network is initialized if it is constructed as an uninitialized `dlnetwork`). If you use the custom layer inside a network that is trained using the `trainNetwork` function, then `dlnet` is automatically initialized at training time. For more information, see Deep Learning Network Composition.
