For most tasks, you can use built-in layers. If the built-in layers do not provide the layer that you need for your task, then you can define your own custom layer. You can specify a custom loss function using a custom output layer and define custom layers with learnable and state parameters. After defining a custom layer, you can check that the layer is valid, GPU compatible, and outputs correctly defined gradients. To learn more, see Define Custom Deep Learning Layers. For a list of supported layers, see List of Deep Learning Layers.
If the trainingOptions function does not provide the training options that you need for your task, or if custom output layers do not support the loss functions that you need, then you can define a custom training loop. For models that layer graphs do not support, you can define a custom model as a function. To learn more, see Define Custom Training Loops, Loss Functions, and Networks.
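A custom training loop can be sketched in a few lines. The example below is a minimal, non-authoritative sketch that assumes a dlnetwork object `net`, a fixed `numIterations`, and classification targets already exist; the `nextBatch` helper is hypothetical and stands in for your own mini-batch logic (for example, a minibatchqueue):

```matlab
% Minimal custom training loop sketch using the Adam solver.
avgGrad = []; avgSqGrad = [];
for iteration = 1:numIterations
    [X,T] = nextBatch();                 % hypothetical batching helper
    dlX = dlarray(single(X),"SSCB");     % spatial, spatial, channel, batch
    [loss,gradients] = dlfeval(@modelLoss,net,dlX,T);
    [net,avgGrad,avgSqGrad] = adamupdate(net,gradients, ...
        avgGrad,avgSqGrad,iteration);
end

function [loss,gradients] = modelLoss(net,dlX,T)
    % Forward pass, loss, and gradients with respect to the learnables.
    dlY = forward(net,dlX);
    loss = crossentropy(dlY,T);
    gradients = dlgradient(loss,net.Learnables);
end
```

The loss function must be evaluated through dlfeval so that dlgradient can trace the computation for automatic differentiation.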
Use deep learning operations to develop MATLAB® code for custom layers, training loops, and model functions.
|Deep learning array for customization (Since R2019b)|
|Dimension labels of dlarray object|
|Find dimensions with specified label (Since R2019b)|
|Extract data from dlarray object|
|Check if object is dlarray|
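The dlarray utilities above fit together as in this short sketch; the data and format string are illustrative:

```matlab
dlX = dlarray(rand(28,28,3,16),"SSCB");  % labeled deep learning array
dims(dlX)                                % 'SSCB'
finddim(dlX,"C")                         % 3, the channel dimension
X = extractdata(dlX);                    % underlying numeric array
isdlarray(dlX)                           % true
```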
Deep Learning Operations
|Deep learning convolution (Since R2019b)|
|Deep learning transposed convolution (Since R2019b)|
|Long short-term memory (Since R2019b)|
|Gated recurrent unit (Since R2020a)|
|Dot-product attention (Since R2022b)|
|Embed discrete data (Since R2020b)|
|Sum all weighted input data and apply a bias (Since R2019b)|
|Deep learning solution of nonstiff ordinary differential equation (ODE) (Since R2021b)|
|Normalize data across all observations for each channel independently (Since R2019b)|
|Cross channel square-normalize using local responses (Since R2020a)|
|Normalize data across grouped subsets of channels for each observation independently (Since R2020b)|
|Normalize across each channel for each observation independently (Since R2021a)|
|Normalize data across all channels for each observation independently (Since R2021a)|
|Pool data to average values over spatial dimensions (Since R2019b)|
|Pool data to maximum value (Since R2019b)|
|Unpool the output of a maximum pooling operation (Since R2019b)|
|Apply rectified linear unit activation (Since R2019b)|
|Apply leaky rectified linear unit activation (Since R2019b)|
|Apply Gaussian error linear unit (GELU) activation (Since R2022b)|
|Apply softmax activation to channel dimension (Since R2019b)|
|Apply sigmoid activation (Since R2019b)|
|Cross-entropy loss for classification tasks (Since R2019b)|
|L1 loss for regression tasks (Since R2021b)|
|L2 loss for regression tasks (Since R2021b)|
|Huber loss for regression tasks (Since R2021a)|
|Half mean squared error (Since R2019b)|
|Connectionist temporal classification (CTC) loss for unaligned sequence classification (Since R2021a)|
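These operations compose into model functions. The following is a hedged sketch, assuming a hypothetical `parameters` structure of learnable weights and biases and a formatted "SSCB" input:

```matlab
function dlY = model(parameters,dlX)
    % Convolution, activation, pooling, then a fully connected softmax head.
    dlY = dlconv(dlX,parameters.conv.Weights,parameters.conv.Bias, ...
        Padding="same");
    dlY = relu(dlY);
    dlY = avgpool(dlY,2,Stride=2);
    dlY = fullyconnect(dlY,parameters.fc.Weights,parameters.fc.Bias);
    dlY = softmax(dlY);
end
```

Because each operation accepts and returns formatted dlarray objects, the same function works inside a model loss function traced by dlgradient.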
- List of Functions with dlarray Support
View the list of functions that support dlarray objects.
- Automatic Differentiation Background
Learn how automatic differentiation works.
- Use Automatic Differentiation In Deep Learning Toolbox
How to use automatic differentiation in deep learning.
- Train Network Using Model Function
This example shows how to create and train a deep learning network by using functions rather than a layer graph or a dlnetwork object.
- Update Batch Normalization Statistics Using Model Function
This example shows how to update the network state in a network defined as a function.
- Make Predictions Using Model Function
This example shows how to make predictions using a model function by splitting data into mini-batches.
- Initialize Learnable Parameters for Model Function
Learn how to initialize learnable parameters for custom training loops using a model function.
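As a minimal illustration of the automatic differentiation workflow described above, dlgradient computes derivatives inside a function evaluated by dlfeval:

```matlab
x = dlarray(2);
% d/dx (x^2 + 3x) = 2x + 3, which is 7 at x = 2.
dydx = dlfeval(@(x) dlgradient(x.^2 + 3*x, x), x);
```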
Deep Learning Function Acceleration
- Deep Learning Function Acceleration for Custom Training Loops
Accelerate model functions and model loss functions for custom training loops by caching and reusing traces.
- Accelerate Custom Training Loop Functions
This example shows how to accelerate deep learning custom training loop and prediction functions.
- Check Accelerated Deep Learning Function Outputs
This example shows how to check that the outputs of accelerated functions match the outputs of the underlying function.
- Evaluate Performance of Accelerated Deep Learning Function
This example shows how to evaluate the performance gains of using an accelerated function.
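For example, a model loss function can be accelerated with dlaccelerate. This sketch assumes that `modelLoss`, `net`, `dlX`, and `T` exist as in a custom training loop:

```matlab
accfun = dlaccelerate(@modelLoss);       % returns an AcceleratedFunction
clearCache(accfun)                       % discard any previously cached traces
[loss,gradients] = dlfeval(accfun,net,dlX,T);
```

Subsequent calls with inputs of the same size and format reuse the cached trace instead of retracing the function.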