
layerNormalizationLayer

Layer normalization layer

    Description

    A layer normalization layer normalizes a mini-batch of data across all channels for each observation independently. To speed up training of recurrent and multilayer perceptron neural networks and reduce the sensitivity to network initialization, use layer normalization layers after the learnable layers, such as LSTM and fully connected layers.

    After normalization, the layer scales the input with a learnable scale factor γ and shifts it by a learnable offset β.
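
    For example, a minimal sketch (with placeholder sizes: 12 input features, 100 hidden units, 64 fully connected units, and 5 classes) of a sequence classification network that applies layer normalization after the LSTM and fully connected layers might look like this:

    layers = [
        sequenceInputLayer(12)
        lstmLayer(100,'OutputMode','last')
        layerNormalizationLayer
        fullyConnectedLayer(64)
        layerNormalizationLayer
        reluLayer
        fullyConnectedLayer(5)
        softmaxLayer
        classificationLayer];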

    Creation

    Description

    layer = layerNormalizationLayer creates a layer normalization layer.


    layer = layerNormalizationLayer(Name,Value) sets the optional Epsilon, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value arguments. For example, layerNormalizationLayer('Name','layernorm') creates a layer normalization layer with name 'layernorm'.

    Properties


    Layer Normalization

    Epsilon – Constant to add to the mini-batch variances, specified as a numeric scalar greater than or equal to 1e-5.

    The layer adds this constant to the mini-batch variances before normalization to ensure numerical stability and avoid division by zero.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
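
    For example, this sketch creates a layer normalization layer with a larger Epsilon; the value 1e-3 is illustrative, not a recommendation:

    layer = layerNormalizationLayer('Epsilon',1e-3);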

    This property is read-only.

    NumChannels – Number of input channels, specified as one of the following:

    • 'auto' — Automatically determine the number of input channels at training time.

    • Positive integer — Configure the layer for the specified number of input channels. NumChannels and the number of channels in the layer input data must match. For example, if the input is an RGB image, then NumChannels must be 3. If the input is the output of a convolutional layer with 16 filters, then NumChannels must be 16.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
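
    NumChannels is read-only; when it is 'auto', the software determines the value from the input data at training time. As a minimal sketch, you can inspect the default:

    layer = layerNormalizationLayer;
    layer.NumChannels    % 'auto' before the network is trained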

    Parameters and Initialization

    ScaleInitializer – Function to initialize the channel scale factors, specified as one of the following:

    • 'ones' – Initialize the channel scale factors with ones.

    • 'zeros' – Initialize the channel scale factors with zeros.

    • 'narrow-normal' – Initialize the channel scale factors by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.

    • Function handle – Initialize the channel scale factors with a custom function. If you specify a function handle, then the function must be of the form scale = func(sz), where sz is the size of the scale. For an example, see Specify Custom Weight Initialization Function.

    The layer only initializes the channel scale factors when the Scale property is empty.

    Data Types: char | string | function_handle
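
    For example, this sketch uses a custom (hypothetical) initializer that samples the channel scale factors from a normal distribution with mean 1 and standard deviation 0.01. The function receives the size of the scale, sz, and returns an array of that size:

    customScaleInit = @(sz) 1 + 0.01*randn(sz);
    layer = layerNormalizationLayer('ScaleInitializer',customScaleInit);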

    OffsetInitializer – Function to initialize the channel offsets, specified as one of the following:

    • 'zeros' – Initialize the channel offsets with zeros.

    • 'ones' – Initialize the channel offsets with ones.

    • 'narrow-normal' – Initialize the channel offsets by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.

    • Function handle – Initialize the channel offsets with a custom function. If you specify a function handle, then the function must be of the form offset = func(sz), where sz is the size of the offset. For an example, see Specify Custom Weight Initialization Function.

    The layer only initializes the channel offsets when the Offset property is empty.

    Data Types: char | string | function_handle
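
    For example, this sketch selects the built-in 'narrow-normal' initializer for the channel offsets instead of the default:

    layer = layerNormalizationLayer('OffsetInitializer','narrow-normal');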

    Scale – Channel scale factors γ, specified as a numeric array.

    The channel scale factors are learnable parameters. When you train a network, if Scale is nonempty, then trainNetwork uses the Scale property as the initial value. If Scale is empty, then trainNetwork uses the initializer specified by ScaleInitializer.

    At training time, Scale is one of the following:

    • For 2-D image input, a numeric array of size 1-by-1-by-NumChannels

    • For 3-D image input, a numeric array of size 1-by-1-by-1-by-NumChannels

    • For feature or sequence input, a numeric array of size NumChannels-by-1

    Data Types: single | double

    Offset – Channel offsets β, specified as a numeric array.

    The channel offsets are learnable parameters. When you train a network, if Offset is nonempty, then trainNetwork uses the Offset property as the initial value. If Offset is empty, then trainNetwork uses the initializer specified by OffsetInitializer.

    At training time, Offset is one of the following:

    • For 2-D image input, a numeric array of size 1-by-1-by-NumChannels

    • For 3-D image input, a numeric array of size 1-by-1-by-1-by-NumChannels

    • For feature or sequence input, a numeric array of size NumChannels-by-1

    Data Types: single | double
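
    As a minimal sketch (assuming 2-D image input with 3 channels), you can supply initial values for both learnable parameters directly; the layer then uses these values instead of the initializers:

    layer = layerNormalizationLayer;
    layer.Scale  = ones(1,1,3);
    layer.Offset = zeros(1,1,3);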

    Learning Rate and Regularization

    ScaleLearnRateFactor – Learning rate factor for the scale factors, specified as a nonnegative scalar.

    The software multiplies this factor by the global learning rate to determine the learning rate for the scale factors in a layer. For example, if ScaleLearnRateFactor is 2, then the learning rate for the scale factors in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    OffsetLearnRateFactor – Learning rate factor for the offsets, specified as a nonnegative scalar.

    The software multiplies this factor by the global learning rate to determine the learning rate for the offsets in a layer. For example, if OffsetLearnRateFactor is 2, then the learning rate for the offsets in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
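
    For example, this sketch doubles the learning rate for the scale factors and freezes the offsets by setting their learning rate factor to zero; the factor values are illustrative:

    layer = layerNormalizationLayer('ScaleLearnRateFactor',2, ...
        'OffsetLearnRateFactor',0);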

    ScaleL2Factor – L2 regularization factor for the scale factors, specified as a nonnegative scalar.

    The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the scale factors in a layer. For example, if ScaleL2Factor is 2, then the L2 regularization for the scale factors in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    OffsetL2Factor – L2 regularization factor for the offsets, specified as a nonnegative scalar.

    The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the offsets in a layer. For example, if OffsetL2Factor is 2, then the L2 regularization for the offsets in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
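
    For example, this sketch doubles the L2 regularization applied to both the scale factors and the offsets relative to the global factor set in trainingOptions; the factor values are illustrative:

    layer = layerNormalizationLayer('ScaleL2Factor',2,'OffsetL2Factor',2);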

    Layer

    Name – Layer name, specified as a character vector or a string scalar. For Layer array input, the trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically assign names to layers with the name ''.

    Data Types: char | string

    This property is read-only.

    NumInputs – Number of inputs of the layer. This layer accepts a single input only.

    Data Types: double

    This property is read-only.

    InputNames – Input names of the layer. This layer accepts a single input only.

    Data Types: cell

    This property is read-only.

    NumOutputs – Number of outputs of the layer. This layer has a single output only.

    Data Types: double

    This property is read-only.

    OutputNames – Output names of the layer. This layer has a single output only.

    Data Types: cell

    Examples


    Create a layer normalization layer with the name 'layernorm'.

    layer = layerNormalizationLayer('Name','layernorm')
    layer = 
      LayerNormalizationLayer with properties:
    
               Name: 'layernorm'
        NumChannels: 'auto'
    
       Hyperparameters
            Epsilon: 1.0000e-05
    
       Learnable Parameters
             Offset: []
              Scale: []
    
      Show all properties
    
    

    Include a layer normalization layer in a Layer array.

    layers = [
        imageInputLayer([32 32 3]) 
        convolution2dLayer(3,16,'Padding',1)
        layerNormalizationLayer
        reluLayer   
        maxPooling2dLayer(2,'Stride',2)
        convolution2dLayer(3,32,'Padding',1)
        layerNormalizationLayer
        reluLayer
        fullyConnectedLayer(10)
        softmaxLayer
        classificationLayer]
    layers = 
      11x1 Layer array with layers:
    
         1   ''   Image Input             32x32x3 images with 'zerocenter' normalization
         2   ''   2-D Convolution         16 3x3 convolutions with stride [1  1] and padding [1  1  1  1]
         3   ''   Layer Normalization     Layer normalization
         4   ''   ReLU                    ReLU
         5   ''   2-D Max Pooling         2x2 max pooling with stride [2  2] and padding [0  0  0  0]
         6   ''   2-D Convolution         32 3x3 convolutions with stride [1  1] and padding [1  1  1  1]
         7   ''   Layer Normalization     Layer normalization
         8   ''   ReLU                    ReLU
         9   ''   Fully Connected         10 fully connected layer
        10   ''   Softmax                 softmax
        11   ''   Classification Output   crossentropyex
    

    Algorithms

    The layer normalization operation normalizes the elements $x_i$ of the input by first calculating the mean $\mu_L$ and variance $\sigma_L^2$ over the spatial, time, and channel dimensions for each observation independently. Then, it calculates the normalized activations as

    $$\hat{x}_i = \frac{x_i - \mu_L}{\sqrt{\sigma_L^2 + \epsilon}},$$

    where ϵ is a constant that improves numerical stability when the variance is very small.

    To allow for the possibility that inputs with zero mean and unit variance are not optimal for the operations that follow layer normalization, the layer normalization operation further shifts and scales the activations using the transformation

    $$y_i = \gamma \hat{x}_i + \beta,$$

    where the offset β and scale factor γ are learnable parameters that are updated during network training.
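
    The following sketch reproduces this computation numerically for a single observation. It is not the layer's internal implementation; the input size and the scalar values of γ and β are illustrative:

    x = randn(4,4,3,'single');          % one 4-by-4 observation with 3 channels
    epsilon = 1e-5;
    mu = mean(x,'all');                 % mean over spatial and channel dimensions
    sigma2 = var(x,1,'all');            % variance normalized by the number of elements
    xhat = (x - mu)./sqrt(sigma2 + epsilon);
    gamma = 1; beta = 0;                % per-channel parameters in the layer; scalars here
    y = gamma.*xhat + beta;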

    References

    [1] Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. “Layer Normalization.” Preprint, submitted July 21, 2016. https://arxiv.org/abs/1607.06450.

    Version History

    Introduced in R2021a