dlnetwork

Deep learning network for custom training loops

Since R2019b

Description

A dlnetwork object enables support for custom training loops using automatic differentiation.

Tip

For most deep learning tasks, you can use a pretrained neural network and adapt it to your own data. For an example showing how to use transfer learning to retrain a convolutional neural network to classify a new set of images, see Train Deep Learning Network to Classify New Images. Alternatively, you can create and train neural networks from scratch using the trainnet, trainNetwork, and trainingOptions functions.

If the trainingOptions function does not provide the training options that you need for your task, then you can create a custom training loop using automatic differentiation. To learn more, see Define Deep Learning Network for Custom Training Loops.

Creation

Description

net = dlnetwork(layers) converts the network layers specified in layers to an initialized dlnetwork object representing a deep neural network for use with custom training loops. layers can be a LayerGraph object or a Layer array. layers must contain an input layer.

An initialized dlnetwork object is ready for training. The learnable parameters and state values of net are initialized for training with initial values based on the input size defined by the network input layer.
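For instance, this minimal sketch (with arbitrary layer sizes) builds an initialized network from a layer array that starts with an input layer:

layers = [
    imageInputLayer([28 28 1])
    fullyConnectedLayer(10)
    softmaxLayer];
net = dlnetwork(layers);   % initialized using the input layer size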

net = dlnetwork(layers,X1,...,Xn) creates an initialized dlnetwork object using network data layout objects or example inputs X1,...,Xn. The learnable parameters and state values of net are initialized with initial values based on the size and format defined by X1,...,Xn. Use this syntax to create an initialized dlnetwork with inputs that are not connected to an input layer.
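For instance, a brief sketch (arbitrary sizes) that initializes a network from a formatted example input instead of an input layer:

layers = [
    convolution2dLayer(3,16)
    reluLayer];
X = dlarray(rand(32,32,3,1),"SSCB");   % example input: 32-by-32, 3 channels, batch 1
net = dlnetwork(layers,X);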

net = dlnetwork(layers,'Initialize',tf) specifies whether to return an initialized or uninitialized dlnetwork. Use this syntax to create an uninitialized network.

An uninitialized network has unset, empty values for learnable and state parameters and is not ready for training. You must initialize an uninitialized dlnetwork before you can use it. Create an uninitialized network when you want to defer initialization to a later point. You can use uninitialized dlnetwork objects to create complex networks using intermediate building blocks that you then connect together, for example, using Deep Learning Network Composition workflows. You can initialize an uninitialized dlnetwork using the initialize function.
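For instance, a sketch of the deferred-initialization workflow (arbitrary sizes):

layers = [
    convolution2dLayer(3,16)
    reluLayer
    fullyConnectedLayer(10)];
net = dlnetwork(layers,'Initialize',false);
X = dlarray(rand(28,28,1,1),"SSCB");   % example input supplied later
net = initialize(net,X);               % now ready for training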

net = dlnetwork(___,'OutputNames',names) also sets the OutputNames property using any of the previous syntaxes. The OutputNames property specifies the layers that return the network outputs. To set the output names, the network must be initialized.
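For instance, a sketch (hypothetical layer names, assuming intermediate layers can be named as outputs) that returns both the softmax output and the pre-softmax activation:

layers = [
    imageInputLayer([28 28 1])
    fullyConnectedLayer(10,Name="fc")
    softmaxLayer(Name="sm")];
net = dlnetwork(layers,'OutputNames',{'fc','sm'});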

net = dlnetwork(prunableNet) removes filters selected for pruning from the convolution layers of prunableNet and returns a compressed dlnetwork object that has fewer learnable parameters and is smaller in size.

To prune a deep neural network, you require the Deep Learning Toolbox™ Model Quantization Library support package. This support package is a free add-on that you can download using the Add-On Explorer. Alternatively, see Deep Learning Toolbox Model Quantization Library.

Input Arguments

Network layers, specified as a LayerGraph object or as a Layer array.

If layers is a Layer array, then the dlnetwork function connects the layers in series.

The network layers must not contain output layers. When training the network, calculate the loss separately.

Example network inputs or data layouts, specified as formatted dlarray objects or formatted networkDataLayout objects. The software propagates X1,...,Xn through the network to determine the appropriate sizes and formats of the learnable and state parameters of the dlnetwork.

When layers is a Layer array, provide the example inputs in the same order that the layers requiring inputs appear in the array. When layers is a LayerGraph object, provide the example inputs in the same order that the layers requiring inputs appear in the Layers property of the LayerGraph.

Example inputs are not supported when tf is false.

Note

Automatic initialization uses only the size and format information of the input data. For initialization that depends on the values of the input data, you must initialize the learnable parameters manually.

Flag to return initialized dlnetwork, specified as a numeric or logical 1 (true) or 0 (false).

If tf is 1, then the software initializes the learnable and state parameters of net with initial values for training, according to the network input layer or the example inputs provided.

If tf is 0, then the software does not initialize the learnable and state parameters. Before you use an uninitialized network, you must first initialize it using the initialize function. Example inputs are not supported when tf is false.

Network for pruning by using first-order Taylor approximation, specified as a TaylorPrunableNetwork object.

Properties

This property is read-only.

Network layers, specified as a Layer array.

This property is read-only.

Layer connections, specified as a table with two columns.

Each table row represents a connection in the layer graph. The first column, Source, specifies the source of each connection. The second column, Destination, specifies the destination of each connection. The connection sources and destinations are either layer names or have the form 'layerName/IOName', where 'IOName' is the name of the layer input or output.

Data Types: table

Network learnable parameters, specified as a table with three columns:

  • Layer – Layer name, specified as a string scalar.

  • Parameter – Parameter name, specified as a string scalar.

  • Value – Value of parameter, specified as a dlarray object.

The network learnable parameters contain the features learned by the network. For example, the weights of convolution and fully connected layers.

Data Types: table

Network state, specified as a table.

The network state is a table with three columns:

  • Layer – Layer name, specified as a string scalar.

  • Parameter – State parameter name, specified as a string scalar.

  • Value – Value of state parameter, specified as a dlarray object.

Layer states contain information calculated during the layer operation to be retained for use in subsequent forward passes of the layer. For example, the cell state and hidden state of LSTM layers, or running statistics in batch normalization layers.

For recurrent layers, such as LSTM layers, with the HasStateInputs property set to 1 (true), the state table does not contain entries for the states of that layer.

During training or inference, you can update the network state using the output of the forward and predict functions.
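For example, in a training loop you might update the network state after each forward pass:

[Y,state] = forward(net,X);
net.State = state;    % carry the updated running statistics forward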

Data Types: table

This property is read-only.

Names of the network inputs, specified as a cell array of character vectors.

Network inputs are the input layers and the unconnected inputs of layers.

For input layers and layers with a single input, the input name is the name of the layer. For layers with multiple inputs, the input name is 'layerName/inputName', where layerName is the name of the layer and inputName is the name of the layer input.

Data Types: cell

Names of the network outputs, specified as a cell array of character vectors.

For layers with a single output, the output name is the name of the layer. For layers with multiple outputs, the output name is 'layerName/outputName', where layerName is the name of the layer and outputName is the name of the layer output.

If you do not specify the output names, then the software sets the OutputNames property to the layers with unconnected outputs.

The predict and forward functions, by default, return the data output by the layers given by the OutputNames property.
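For instance, assuming a network with two entries in OutputNames, a call such as the following returns one output per entry:

[Y1,Y2] = predict(net,X);   % outputs correspond to net.OutputNames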

Data Types: cell

This property is read-only.

Flag for initialized network, specified as 0 (false) or 1 (true).

If Initialized is 0 (false), the network is not initialized. You must initialize the network before you can use it. Initialize the network using the initialize function.

If Initialized is 1 (true), the network is initialized and can be used for training and inference. If you change the values of learnable parameters (for example, during training), the value of Initialized remains 1 (true).

Data Types: logical

Object Functions

predict – Compute deep learning network output for inference
forward – Compute deep learning network output for training
initialize – Initialize learnable and state parameters of a dlnetwork
layerGraph – Graph of network layers for deep learning
setL2Factor – Set L2 regularization factor of layer learnable parameter
setLearnRateFactor – Set learn rate factor of layer learnable parameter
getLearnRateFactor – Get learn rate factor of layer learnable parameter
getL2Factor – Get L2 regularization factor of layer learnable parameter
resetState – Reset state parameters of neural network
plot – Plot neural network architecture
addInputLayer – Add input layer to network
addLayers – Add layers to layer graph or network
removeLayers – Remove layers from layer graph or network
connectLayers – Connect layers in layer graph or network
disconnectLayers – Disconnect layers in layer graph or network
replaceLayer – Replace layer in layer graph or network
summary – Print network summary

Examples

Convert Pretrained Network to dlnetwork Object

To implement a custom training loop for your network, first convert it to a dlnetwork object. Do not include output layers in a dlnetwork object. Instead, you must specify the loss function in the custom training loop.

Load a pretrained GoogLeNet model using the googlenet function. This function requires the Deep Learning Toolbox™ Model for GoogLeNet Network support package. If this support package is not installed, then the function provides a download link.

net = googlenet;

Convert the network to a layer graph and remove the layers used for classification using removeLayers.

lgraph = layerGraph(net);
lgraph = removeLayers(lgraph,["prob" "output"]);

Convert the network to a dlnetwork object.

dlnet = dlnetwork(lgraph)
dlnet = 
  dlnetwork with properties:

         Layers: [142x1 nnet.cnn.layer.Layer]
    Connections: [168x2 table]
     Learnables: [116x3 table]
          State: [0x3 table]
     InputNames: {'data'}
    OutputNames: {'loss3-classifier'}
    Initialized: 1

  View summary with summary.

Create dlnetwork Using Network Data Layout Objects

Use network data layout objects to create a multi-input dlnetwork that is ready for training. The software uses the size and format information to determine the appropriate sizes and formats of the learnable and state parameters of the dlnetwork.

Define the network architecture. Construct a network with two branches. The network takes two inputs, with one input per branch. Connect the branches using an addition layer.

numFilters = 24;

layersBranch1 = [
    convolution2dLayer(3,6*numFilters,Padding="same",Stride=2)
    groupNormalizationLayer("all-channels")
    reluLayer
    convolution2dLayer(3,numFilters,Padding="same")
    groupNormalizationLayer("channel-wise")
    additionLayer(2,Name="add")
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer];

layersBranch2 = [
    convolution2dLayer(1,numFilters)
    groupNormalizationLayer("all-channels",Name="gnBranch2")];

lgraph = layerGraph(layersBranch1);
lgraph = addLayers(lgraph,layersBranch2);
lgraph = connectLayers(lgraph,"gnBranch2","add/in2");

Create network data layout objects that represent the size and format of typical network inputs. For both inputs, use a batch size of 32. Use an input of size 64-by-64 with three channels for the convolution layer in the first branch. Use an input of size 32-by-32 with 18 channels for the convolution layer in the second branch.

X1 = networkDataLayout([64 64 3 32],"SSCB");
X2 = networkDataLayout([32 32 18 32],"SSCB");

Create the dlnetwork. Provide the inputs in the same order that the unconnected layers appear in the Layers property of lgraph.

net = dlnetwork(lgraph,X1,X2);

Check that the network is initialized and ready for training by inspecting the Initialized property of the network.

net.Initialized
ans = logical
   1

Train Network Using Custom Training Loop

This example shows how to train a network that classifies handwritten digits with a custom learning rate schedule.

You can train most types of neural networks using the trainNetwork and trainingOptions functions. If the trainingOptions function does not provide the options you need (for example, a custom learning rate schedule), then you can define your own custom training loop using dlarray and dlnetwork objects for automatic differentiation. For an example showing how to retrain a pretrained deep learning network using the trainNetwork function, see Transfer Learning Using Pretrained Network.

Training a deep neural network is an optimization task. By considering a neural network as a function f(X;θ), where X is the network input and θ is the set of learnable parameters, you can optimize θ to minimize some loss value based on the training data. For example, optimize the learnable parameters θ such that, for given inputs X with corresponding targets T, they minimize the error between the predictions Y = f(X;θ) and T.

The loss function used depends on the type of task. For example:

  • For classification tasks, you can minimize the cross entropy error between the predictions and targets.

  • For regression tasks, you can minimize the mean squared error between the predictions and targets.

You can optimize the objective using gradient descent: minimize the loss L by iteratively updating the learnable parameters θ, taking steps towards the minimum using the gradients of the loss with respect to the learnable parameters. Gradient descent algorithms typically update the learnable parameters using a variant of the update step θ_{t+1} = θ_t − ρ∇L, where t is the iteration number, ρ is the learning rate, and ∇L denotes the gradients (the derivatives of the loss with respect to the learnable parameters).

This example trains a network to classify handwritten digits with a time-based decay learning rate schedule: for each iteration, the solver uses the learning rate given by ρ_t = ρ_0/(1 + kt), where t is the iteration number, ρ_0 is the initial learning rate, and k is the decay.
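As a sketch, you can compute the schedule directly from the iteration number; the same expression appears in the training loop later in this example:

initialLearnRate = 0.01;                         % rho_0
decay = 0.01;                                    % k
t = 1:100;                                       % iteration numbers
learnRate = initialLearnRate ./ (1 + decay*t);   % rho_t for each iteration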

Load Training Data

Load the digits data as an image datastore using the imageDatastore function and specify the folder containing the image data.

unzip("DigitsData.zip")

imds = imageDatastore("DigitsData", ...
    IncludeSubfolders=true, ...
    LabelSource="foldernames");

Partition the data into training and validation sets. Set aside 10% of the data for validation using the splitEachLabel function.

[imdsTrain,imdsValidation] = splitEachLabel(imds,0.9,"randomize");

The network used in this example requires input images of size 28-by-28-by-1. To automatically resize the training images, use an augmented image datastore. Specify additional augmentation operations to perform on the training images: randomly translate the images up to 5 pixels in the horizontal and vertical axes. Data augmentation helps prevent the network from overfitting and memorizing the exact details of the training images.

inputSize = [28 28 1];
pixelRange = [-5 5];

imageAugmenter = imageDataAugmenter( ...
    RandXTranslation=pixelRange, ...
    RandYTranslation=pixelRange);

augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain,DataAugmentation=imageAugmenter);

To automatically resize the validation images without performing further data augmentation, use an augmented image datastore without specifying any additional preprocessing operations.

augimdsValidation = augmentedImageDatastore(inputSize(1:2),imdsValidation);

Determine the number of classes in the training data.

classes = categories(imdsTrain.Labels);
numClasses = numel(classes);

Define Network

Define the network for image classification.

  • For image input, specify an image input layer with input size matching the training data.

  • Do not normalize the image input; set the Normalization option of the input layer to "none".

  • Specify three convolution-batchnorm-ReLU blocks.

  • Pad the input to the convolution layers such that the output has the same size by setting the Padding option to "same".

  • For the first convolution layer, specify 20 filters of size 5. For the remaining convolution layers, specify 20 filters of size 3.

  • For classification, specify a fully connected layer with a size matching the number of classes.

  • To map the output to probabilities, include a softmax layer.

When training a network using a custom training loop, do not include an output layer.

layers = [
    imageInputLayer(inputSize,Normalization="none")
    convolution2dLayer(5,20,Padding="same")
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(3,20,Padding="same")
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(3,20,Padding="same")
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(numClasses)
    softmaxLayer];

Create a dlnetwork object from the layer array.

net = dlnetwork(layers)
net = 
  dlnetwork with properties:

         Layers: [12×1 nnet.cnn.layer.Layer]
    Connections: [11×2 table]
     Learnables: [14×3 table]
          State: [6×3 table]
     InputNames: {'imageinput'}
    OutputNames: {'softmax'}
    Initialized: 1

  View summary with summary.

Define Model Loss Function

Training a deep neural network is an optimization task. By considering a neural network as a function f(X;θ), where X is the network input and θ is the set of learnable parameters, you can optimize θ to minimize some loss value based on the training data. For example, optimize the learnable parameters θ such that, for given inputs X with corresponding targets T, they minimize the error between the predictions Y = f(X;θ) and T.

Create the function modelLoss, listed in the Model Loss Function section of the example. The function takes as input the dlnetwork object and a mini-batch of input data with corresponding targets, and returns the loss, the gradients of the loss with respect to the learnable parameters, and the network state.

Specify Training Options

Train for ten epochs with a mini-batch size of 128.

numEpochs = 10;
miniBatchSize = 128;

Specify the options for SGDM optimization: an initial learning rate of 0.01, a decay of 0.01, and a momentum of 0.9.

initialLearnRate = 0.01;
decay = 0.01;
momentum = 0.9;

Train Model

Create a minibatchqueue object that processes and manages mini-batches of images during training. For each mini-batch:

  • Use the custom mini-batch preprocessing function preprocessMiniBatch (defined at the end of this example) to convert the labels to one-hot encoded variables.

  • Format the image data with the dimension labels "SSCB" (spatial, spatial, channel, batch). By default, the minibatchqueue object converts the data to dlarray objects with underlying type single. Do not format the class labels.

  • Discard partial mini-batches.

  • Train on a GPU if one is available. By default, the minibatchqueue object converts each output to a gpuArray if a GPU is available. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox).

mbq = minibatchqueue(augimdsTrain,...
    MiniBatchSize=miniBatchSize,...
    MiniBatchFcn=@preprocessMiniBatch,...
    MiniBatchFormat=["SSCB" ""], ...
    PartialMiniBatch="discard");

Initialize the velocity parameter for the SGDM solver.

velocity = [];

Calculate the total number of iterations for the training progress monitor.

numObservationsTrain = numel(imdsTrain.Files);
numIterationsPerEpoch = floor(numObservationsTrain / miniBatchSize);
numIterations = numEpochs * numIterationsPerEpoch;

Initialize the TrainingProgressMonitor object. Because the timer starts when you create the monitor object, make sure that you create the object close to the training loop.

monitor = trainingProgressMonitor( ...
    Metrics="Loss", ...
    Info=["Epoch" "LearnRate"], ...
    XLabel="Iteration");

Train the network using a custom training loop. For each epoch, shuffle the data and loop over mini-batches of data. For each mini-batch:

  • Evaluate the model loss, gradients, and state using the dlfeval and modelLoss functions and update the network state.

  • Determine the learning rate for the time-based decay learning rate schedule.

  • Update the network parameters using the sgdmupdate function.

  • Update the loss, learn rate, and epoch values in the training progress monitor.

  • Stop if the Stop property is true. The Stop property value of the TrainingProgressMonitor object changes to true when you click the Stop button.

epoch = 0;
iteration = 0;

% Loop over epochs.
while epoch < numEpochs && ~monitor.Stop
    
    epoch = epoch + 1;

    % Shuffle data.
    shuffle(mbq);
    
    % Loop over mini-batches.
    while hasdata(mbq) && ~monitor.Stop

        iteration = iteration + 1;
        
        % Read mini-batch of data.
        [X,T] = next(mbq);
        
        % Evaluate the model gradients, state, and loss using dlfeval and the
        % modelLoss function and update the network state.
        [loss,gradients,state] = dlfeval(@modelLoss,net,X,T);
        net.State = state;
        
        % Determine learning rate for time-based decay learning rate schedule.
        learnRate = initialLearnRate/(1 + decay*iteration);
        
        % Update the network parameters using the SGDM optimizer.
        [net,velocity] = sgdmupdate(net,gradients,velocity,learnRate,momentum);
        
        % Update the training progress monitor.
        recordMetrics(monitor,iteration,Loss=loss);
        updateInfo(monitor,Epoch=epoch,LearnRate=learnRate);
        monitor.Progress = 100 * iteration/numIterations;
    end
end

Test Model

Test the classification accuracy of the model by comparing the predictions on the validation set with the true labels.

After training, making predictions on new data does not require the labels. Create a minibatchqueue object containing only the predictors of the test data:

  • To ignore the labels for testing, set the number of outputs of the mini-batch queue to 1.

  • Specify the same mini-batch size used for training.

  • Preprocess the predictors using the preprocessMiniBatchPredictors function, listed at the end of the example.

  • For the single output of the datastore, specify the mini-batch format "SSCB" (spatial, spatial, channel, batch).

numOutputs = 1;

mbqTest = minibatchqueue(augimdsValidation,numOutputs, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFcn=@preprocessMiniBatchPredictors, ...
    MiniBatchFormat="SSCB");

Loop over the mini-batches and classify the images using the modelPredictions function, listed at the end of the example.

YTest = modelPredictions(net,mbqTest,classes);

Evaluate the classification accuracy.

TTest = imdsValidation.Labels;
accuracy = mean(TTest == YTest)
accuracy = 0.9220

Visualize the predictions in a confusion chart.

figure
confusionchart(TTest,YTest)

Large values on the diagonal indicate accurate predictions for the corresponding class. Large values on the off-diagonal indicate strong confusion between the corresponding classes.

Supporting Functions

Model Loss Function

The modelLoss function takes a dlnetwork object net and a mini-batch of input data X with corresponding targets T, and returns the loss, the gradients of the loss with respect to the learnable parameters in net, and the network state. To compute the gradients automatically, use the dlgradient function.

function [loss,gradients,state] = modelLoss(net,X,T)

% Forward data through network.
[Y,state] = forward(net,X);

% Calculate cross-entropy loss.
loss = crossentropy(Y,T);

% Calculate gradients of loss with respect to learnable parameters.
gradients = dlgradient(loss,net.Learnables);

end

Model Predictions Function

The modelPredictions function takes a dlnetwork object net, a minibatchqueue of input data mbq, and the network classes, and computes the model predictions by iterating over all data in the minibatchqueue object. The function uses the onehotdecode function to find the predicted class with the highest score.

function Y = modelPredictions(net,mbq,classes)

Y = [];

% Loop over mini-batches.
while hasdata(mbq)
    X = next(mbq);

    % Make prediction.
    scores = predict(net,X);

    % Decode labels and append to output.
    labels = onehotdecode(scores,classes,1)';
    Y = [Y; labels];
end

end

Mini-Batch Preprocessing Function

The preprocessMiniBatch function preprocesses a mini-batch of predictors and labels using the following steps:

  1. Preprocess the images using the preprocessMiniBatchPredictors function.

  2. Extract the label data from the incoming cell array and concatenate into a categorical array along the second dimension.

  3. One-hot encode the categorical labels into numeric arrays. Encoding into the first dimension produces an encoded array that matches the shape of the network output.

function [X,T] = preprocessMiniBatch(dataX,dataT)

% Preprocess predictors.
X = preprocessMiniBatchPredictors(dataX);

% Extract label data from cell and concatenate.
T = cat(2,dataT{1:end});

% One-hot encode labels.
T = onehotencode(T,1);

end

Mini-Batch Predictors Preprocessing Function

The preprocessMiniBatchPredictors function preprocesses a mini-batch of predictors by extracting the image data from the input cell array and concatenating it into a numeric array. For grayscale input, concatenating over the fourth dimension adds a third dimension to each image to use as a singleton channel dimension.

function X = preprocessMiniBatchPredictors(dataX)

% Concatenate.
X = cat(4,dataX{1:end});

end

Freeze Learnable Parameters of dlnetwork Object

Load a pretrained network.

net = squeezenet;

Convert the network to a layer graph, remove the output layer, and convert it to a dlnetwork object.

lgraph = layerGraph(net);
lgraph = removeLayers(lgraph,'ClassificationLayer_predictions');
dlnet = dlnetwork(lgraph);

The Learnables property of the dlnetwork object is a table that contains the learnable parameters of the network. The table includes parameters of nested layers in separate rows. View the first few rows of the learnables table.

learnables = dlnet.Learnables;
head(learnables)
          Layer           Parameter           Value       
    __________________    _________    ___________________

    "conv1"               "Weights"    {3x3x3x64  dlarray}
    "conv1"               "Bias"       {1x1x64    dlarray}
    "fire2-squeeze1x1"    "Weights"    {1x1x64x16 dlarray}
    "fire2-squeeze1x1"    "Bias"       {1x1x16    dlarray}
    "fire2-expand1x1"     "Weights"    {1x1x16x64 dlarray}
    "fire2-expand1x1"     "Bias"       {1x1x64    dlarray}
    "fire2-expand3x3"     "Weights"    {3x3x16x64 dlarray}
    "fire2-expand3x3"     "Bias"       {1x1x64    dlarray}

To freeze the learnable parameters of the network, loop over the learnable parameters and set the learn rate to 0 using the setLearnRateFactor function.

factor = 0;

numLearnables = size(learnables,1);
for i = 1:numLearnables
    layerName = learnables.Layer(i);
    parameterName = learnables.Parameter(i);
    
    dlnet = setLearnRateFactor(dlnet,layerName,parameterName,factor);
end

To use the updated learn rate factors when training, you must pass the dlnetwork object to the update function in the custom training loop. For example, use the command

[dlnet,velocity] = sgdmupdate(dlnet,gradients,velocity);

Create Uninitialized dlnetwork

Create an uninitialized dlnetwork object without an input layer. Creating an uninitialized dlnetwork is useful when you do not yet know the size and format of the network inputs, for example, when the dlnetwork is nested inside a custom layer.

Define the network layers. This network has a single input, which is not connected to an input layer.

layers = [
    convolution2dLayer(5,20)
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer];

Create an uninitialized dlnetwork. Set the Initialize option to false.

dlnet = dlnetwork(layers,'Initialize',false);

Check that the network is not initialized.

dlnet.Initialized
ans = logical
   0

The learnable and state parameters of this network are not initialized for training. To initialize the network, use the initialize function.

If you want to use dlnet directly in a custom training loop, then you can initialize it by using the initialize function and providing an example input.

If you want to use dlnet inside a custom layer, then you can take advantage of automatic initialization. If you use the custom layer inside a dlnetwork, then dlnet is initialized when the parent dlnetwork is constructed (or when the parent network is initialized if it is constructed as an uninitialized dlnetwork). If you use the custom layer inside a network that is trained using the trainNetwork function, then dlnet is automatically initialized at training time. For more information, see Deep Learning Network Composition.
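As an illustration, here is a minimal sketch of a custom layer (the name myNestedLayer is hypothetical) that stores a nested dlnetwork in a learnable property. The nested network is created uninitialized, so its parameters are initialized automatically with the parent network:

classdef myNestedLayer < nnet.layer.Layer & nnet.layer.Formattable
    properties (Learnable)
        Network   % nested dlnetwork
    end
    methods
        function layer = myNestedLayer(numFilters)
            % Create the nested network without initializing it. The
            % learnable parameters are initialized with the parent network.
            layers = [
                convolution2dLayer(3,numFilters,Padding="same")
                reluLayer];
            layer.Network = dlnetwork(layers,'Initialize',false);
        end
        function Y = predict(layer,X)
            % Forward the formatted input through the nested network.
            Y = predict(layer.Network,X);
        end
    end
end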

Extended Capabilities

Version History

Introduced in R2019b
