
Custom Training Loops

Customize deep learning training loops and loss functions for sequence and tabular data

If the trainingOptions function does not provide the training options that you need for your task, or custom output layers do not support the loss functions that you need, then you can define a custom training loop. For models that cannot be specified as networks of layers, you can define the model as a function. To learn more, see Define Custom Training Loops, Loss Functions, and Networks.
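
For example, the following minimal sketch shows the typical shape of a custom training loop: evaluate the model loss and gradients inside a function passed to dlfeval, compute the gradients with dlgradient, and update the learnable parameters. The two-layer classifier, random stand-in data, and hyperparameters below are illustrative placeholders, not a shipped example.

    % Minimal custom training loop sketch. Architecture, data, and
    % hyperparameters are placeholders chosen for illustration only.
    layers = [
        featureInputLayer(10)
        fullyConnectedLayer(50)
        reluLayer
        fullyConnectedLayer(3)
        softmaxLayer];
    net = dlnetwork(layers);

    % Random stand-in data: 10 features, 100 observations, 3 classes.
    X = dlarray(rand(10,100),"CB");
    T = dlarray(onehotencode(categorical(randi(3,1,100)),1),"CB");

    numEpochs = 20;
    learnRate = 0.01;
    vel = [];

    for epoch = 1:numEpochs
        % Evaluate the loss and gradients using automatic differentiation.
        [loss,gradients] = dlfeval(@modelLoss,net,X,T);

        % Update the learnable parameters (SGDM shown; adamupdate also works).
        [net,vel] = sgdmupdate(net,gradients,vel,learnRate);
    end

    function [loss,gradients] = modelLoss(net,X,T)
        Y = forward(net,X);
        loss = crossentropy(Y,T);
        gradients = dlgradient(loss,net.Learnables);
    end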

Functions


dlnetwork - Deep learning neural network (Since R2019b)
trainingProgressMonitor - Monitor and plot training progress for deep learning custom training loops (Since R2022b)
minibatchqueue - Create mini-batches for deep learning (Since R2020b); see the sketch after this list
padsequences - Pad or truncate sequence data to same length (Since R2021a)
dlarray - Deep learning array for customization (Since R2019b)
dlgradient - Compute gradients for custom training loops using automatic differentiation (Since R2019b)
dlfeval - Evaluate deep learning model for custom training loops (Since R2019b)
crossentropy - Cross-entropy loss for classification tasks (Since R2019b)
l1loss - L1 loss for regression tasks (Since R2021b)
l2loss - L2 loss for regression tasks (Since R2021b)
huber - Huber loss for regression tasks (Since R2021a)
mse - Half mean squared error (Since R2019b)
ctc - Connectionist temporal classification (CTC) loss for unaligned sequence classification (Since R2021a)
dlconv - Deep learning convolution (Since R2019b)
dltranspconv - Deep learning transposed convolution (Since R2019b)
lstm - Long short-term memory (Since R2019b)
gru - Gated recurrent unit (Since R2020a)
attention - Dot-product attention (Since R2022b)
embed - Embed discrete data (Since R2020b)
fullyconnect - Sum all weighted input data and apply a bias (Since R2019b)
dlode45 - Deep learning solution of nonstiff ordinary differential equation (ODE) (Since R2021b)
batchnorm - Normalize data across all observations for each channel independently (Since R2019b)
crosschannelnorm - Cross-channel square-normalize using local responses (Since R2020a)
groupnorm - Normalize data across grouped subsets of channels for each observation independently (Since R2020b)
instancenorm - Normalize across each channel for each observation independently (Since R2021a)
layernorm - Normalize data across all channels for each observation independently (Since R2021a)
avgpool - Pool data to average values over spatial dimensions (Since R2019b)
maxpool - Pool data to maximum value (Since R2019b)
maxunpool - Unpool the output of a maximum pooling operation (Since R2019b)
relu - Apply rectified linear unit activation (Since R2019b)
leakyrelu - Apply leaky rectified linear unit activation (Since R2019b)
gelu - Apply Gaussian error linear unit (GELU) activation (Since R2022b)
softmax - Apply softmax activation to channel dimension (Since R2019b)
sigmoid - Apply sigmoid activation (Since R2019b)
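
As a hedged illustration of how several of these functions compose, minibatchqueue can batch and format data for a loop like the one sketched above. The data sizes and datastore setup here are invented for the example:

    % Hypothetical in-memory data wrapped in datastores for batching.
    XData = rand(10,1000);                                % 10 features, 1000 observations
    TData = onehotencode(categorical(randi(3,1,1000)),1); % 3-by-1000 one-hot targets
    ds = combine( ...
        arrayDatastore(XData,"IterationDimension",2), ...
        arrayDatastore(TData,"IterationDimension",2));

    mbq = minibatchqueue(ds, ...
        "MiniBatchSize",128, ...
        "MiniBatchFormat",["CB" "CB"]);

    % Drain the queue once per epoch; next returns formatted dlarray batches.
    shuffle(mbq)
    while hasdata(mbq)
        [X,T] = next(mbq);
        % ... evaluate loss and gradients with dlfeval, then update parameters ...
    end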

Topics

Custom Training Loops

Automatic Differentiation