Import networks and network architectures from TensorFlow™-Keras, Caffe, and the ONNX™ (Open Neural Network Exchange) model format. You can also export a trained Deep Learning Toolbox™ network to the ONNX model format.
You can define your own custom deep learning layer for your problem. You can specify a custom loss function using custom output layers and define custom layers with or without learnable parameters. For example, you can use a custom weighted classification layer with weighted cross-entropy loss for classification problems with an imbalanced distribution of classes. After defining a custom layer, you can check that the layer is valid, is GPU compatible, and outputs correctly defined gradients.
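As an illustrative sketch, checking a custom layer might look like the following. The class name `myPreluLayer` is an assumption for illustration; it stands in for any custom layer class you have defined on the MATLAB path.

```matlab
% Sketch: validate a custom layer (assumes a custom layer class named
% myPreluLayer, with a numChannels and name constructor, is on the path).
layer = myPreluLayer(20,"prelu");
validInputSize = [24 24 20];   % height-by-width-by-channels of a sample input
% checkLayer runs validity and gradient checks on the layer;
% 'ObservationDimension' indicates which input dimension holds observations.
checkLayer(layer,validInputSize,'ObservationDimension',4)
```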
If the trainingOptions function does not provide the training options that you need for your task, or if custom output layers do not support the loss functions that you need, then you can define a custom training loop. For networks that cannot be created using layer graphs, you can define custom networks as a function. To learn more, see Define Custom Training Loops, Loss Functions, and Networks.
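A minimal sketch of such a custom training loop follows. It assumes `dlnet` is a dlnetwork object and that `nextMiniBatch` is a hypothetical helper that returns mini-batches of predictors `X` and targets `T` as labeled dlarray data.

```matlab
% Sketch: custom training loop using automatic differentiation and Adam.
numIterations = 1000;          % for example
learnRate = 0.001;
trailingAvg = []; trailingAvgSq = [];
for iteration = 1:numIterations
    [X,T] = nextMiniBatch();   % hypothetical helper returning dlarray data
    % Evaluate the model loss and gradients via automatic differentiation.
    [loss,gradients] = dlfeval(@modelLoss,dlnet,X,T);
    % Update the learnable parameters using the Adam solver.
    [dlnet,trailingAvg,trailingAvgSq] = adamupdate(dlnet,gradients, ...
        trailingAvg,trailingAvgSq,iteration,learnRate);
end

function [loss,gradients] = modelLoss(dlnet,X,T)
    Y = forward(dlnet,X);                      % network output for training
    loss = crossentropy(Y,T);                  % classification loss
    gradients = dlgradient(loss,dlnet.Learnables);
end
```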
|importKerasNetwork|Import a pretrained Keras network and weights|
|importKerasLayers|Import layers from Keras network|
|importCaffeNetwork|Import pretrained convolutional neural network models from Caffe|
|importCaffeLayers|Import convolutional neural network layers from Caffe|
|importONNXNetwork|Import pretrained ONNX network|
|importONNXLayers|Import layers from ONNX network|
|exportONNXNetwork|Export network to ONNX model format|
|findPlaceholderLayers|Find placeholder layers in network architecture imported from Keras or ONNX|
|replaceLayer|Replace layer in layer graph|
|assembleNetwork|Assemble deep learning network from pretrained layers|
|PlaceholderLayer|Layer replacing an unsupported Keras layer, ONNX layer, or unsupported functionality from functionToLayerGraph|
|checkLayer|Check validity of custom layer|
|setLearnRateFactor|Set learn rate factor of layer learnable parameter|
|setL2Factor|Set L2 regularization factor of layer learnable parameter|
|getLearnRateFactor|Get learn rate factor of layer learnable parameter|
|getL2Factor|Get L2 regularization factor of layer learnable parameter|
|dlnetwork|Deep learning network for custom training loops|
|forward|Compute deep learning network output for training|
|predict|Compute deep learning network output for inference|
|adamupdate|Update parameters using adaptive moment estimation (Adam)|
|rmspropupdate|Update parameters using root mean squared propagation (RMSProp)|
|sgdmupdate|Update parameters using stochastic gradient descent with momentum (SGDM)|
|dlupdate|Update parameters using custom function|
|dlarray|Deep learning array for custom training loops|
|dlgradient|Compute gradients for custom training loops using automatic differentiation|
|dlfeval|Evaluate deep learning model for custom training loops|
|dims|Dimension labels of dlarray|
|finddim|Find dimensions with specified label|
|stripdims|Remove dlarray labels|
|extractdata|Extract data from dlarray|
|functionToLayerGraph|Convert deep learning model function to a layer graph|
|dlconv|Deep learning convolution|
|dltranspconv|Deep learning transposed convolution|
|lstm|Long short-term memory|
|fullyconnect|Sum all weighted input data and apply a bias|
|relu|Apply rectified linear unit activation|
|leakyrelu|Apply leaky rectified linear unit activation|
|batchnorm|Normalize each channel of input data|
|avgpool|Pool data to average values over spatial dimensions|
|maxpool|Pool data to maximum value|
|maxunpool|Unpool the output of a maximum pooling operation|
|softmax|Apply softmax activation to channel dimension|
|crossentropy|Categorical cross-entropy loss|
|sigmoid|Apply sigmoid activation|
|mse|Half mean squared error|
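As an illustrative sketch (not from the reference table above), the deep learning operations act directly on labeled dlarray data. The sizes and random data here are arbitrary assumptions for illustration.

```matlab
% Sketch: apply deep learning operations to labeled dlarray data.
X = dlarray(rand(28,28,3,16),"SSCB");   % spatial, spatial, channel, batch
weights = dlarray(rand(10,28*28*3));     % output-by-flattened-input weights
bias = dlarray(zeros(10,1));
Y = fullyconnect(X,weights,bias);        % sum weighted inputs and apply a bias
Y = softmax(Y);                          % softmax over the channel dimension
labels = dims(Y);                        % dimension labels of the result, 'CB'
```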
Learn how to define custom deep learning layers.
Learn how to check the validity of custom deep learning layers.
This example shows how to define a PReLU layer and use it in a convolutional neural network.
This example shows how to define a custom weighted addition layer and use it in a convolutional neural network.
This example shows how to define a custom classification output layer with sum of squares error (SSE) loss and use it in a convolutional neural network.
This example shows how to define and create a custom weighted classification output layer with weighted cross entropy loss.
This example shows how to define a custom regression output layer with mean absolute error (MAE) loss and use it in a convolutional neural network.
This example shows how to train a generative adversarial network (GAN) to generate images.
This example shows how to train a Siamese network to compare handwritten digits using dimensionality reduction.
This example shows how to train a Siamese network to identify similar images of handwritten characters.
Learn how to define and customize deep learning training loops, loss functions, and networks using automatic differentiation.
Learn how to specify common training options in a custom training loop.
This example shows how to train a network that classifies handwritten digits with a custom learning rate schedule.
This example shows how to make predictions using a dlnetwork object by splitting data into mini-batches.
This example shows how to create and train a deep learning network by using functions rather than a layer graph or a dlnetwork object.
This example shows how to make predictions using a model function by splitting data into mini-batches.
This example shows how to create a custom He weight initialization function for convolution layers followed by leaky ReLU layers.
This example shows how to import the layers from a pretrained Keras network, replace the unsupported layers with custom layers, and assemble the layers into a network ready for prediction.
Learn how to define and train deep learning networks with multiple inputs or multiple outputs.
This example shows how to train a deep learning network with multiple outputs that predict both labels and angles of rotations of handwritten digits.
Instead of using the model function for prediction, you can assemble the network into a DAGNetwork ready for prediction by using the assembleNetwork function.
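A minimal sketch of this assembly step, assuming `layers` is a layer array with trained weights (including input and output layers) and `XTest` is hypothetical test data:

```matlab
% Sketch: assemble trained layers into a DAGNetwork for prediction.
net = assembleNetwork(layers);   % assumes layers contains trained weights
YPred = classify(net,XTest);     % XTest is hypothetical test data
```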
Learn how automatic differentiation works.
Learn how to use automatic differentiation in deep learning.
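A minimal sketch of automatic differentiation with dlgradient, which must be called inside a function evaluated by dlfeval:

```matlab
% Sketch: compute dy/dx for y = x^2 at x = 2 using automatic differentiation.
x = dlarray(2);
[y,grad] = dlfeval(@modelGradient,x);   % y is 4, grad is dy/dx = 2x = 4

function [y,grad] = modelGradient(x)
    y = x.^2;
    grad = dlgradient(y,x);   % trace the computation of y back to x
end
```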
View the list of functions that support dlarray objects.
Gradient-weighted class activation mapping (Grad-CAM) explains why a deep learning network makes its classification decisions.