groupNormalizationLayer

Group normalization layer

Description

A group normalization layer normalizes a mini-batch of data across grouped subsets of channels for each observation independently. To speed up training of convolutional neural networks and reduce the sensitivity to network initialization, use group normalization layers between convolutional layers and nonlinearities, such as ReLU layers.

After normalization, the layer scales the input with a learnable scale factor γ and shifts it by a learnable offset β.

Creation

Description

layer = groupNormalizationLayer(numGroups) creates a group normalization layer.

layer = groupNormalizationLayer(numGroups,Name,Value) creates a group normalization layer and sets the optional Epsilon, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value arguments. You can specify multiple name-value arguments. Enclose each property name in quotes.

Input Arguments

Number of groups into which to divide the channels of the input data, specified as one of the following:

  • Positive integer – Divide the incoming channels into the specified number of groups. The specified number of groups must divide the number of channels of the input data exactly.

  • 'all-channels' – Group all incoming channels into a single group. This operation is also known as layer normalization. Alternatively, use layerNormalizationLayer.

  • 'channel-wise' – Treat all incoming channels as separate groups. This operation is also known as instance normalization. Alternatively, use instanceNormalizationLayer.
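For example, each of the following calls creates a valid layer (the group count of 4 is illustrative):

layer = groupNormalizationLayer(4);              % four groups of channels
layer = groupNormalizationLayer('all-channels'); % one group (layer normalization)
layer = groupNormalizationLayer('channel-wise'); % one group per channel (instance normalization)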

Properties

Group Normalization

Constant to add to the mini-batch variances, specified as a numeric scalar equal to or larger than 1e-5.

The layer adds this constant to the mini-batch variances before normalization to ensure numerical stability and avoid division by zero.
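For example, to use a larger stability constant than the default, you can set this property when you create the layer (the group count and value here are illustrative):

layer = groupNormalizationLayer(4,'Epsilon',1e-4);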

Number of input channels, specified as 'auto' or a positive integer.

This property is always equal to the number of channels of the input to the layer. If NumChannels is 'auto', then the software automatically determines the correct value for the number of channels at training time.

Parameters and Initialization

Function to initialize the channel scale factors, specified as one of the following:

  • 'ones' – Initialize the channel scale factors with ones.

  • 'zeros' – Initialize the channel scale factors with zeros.

  • 'narrow-normal' – Initialize the channel scale factors by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.

  • Function handle – Initialize the channel scale factors with a custom function. If you specify a function handle, then the function must be of the form scale = func(sz), where sz is the size of the scale. For an example, see Specify Custom Weight Initialization Function.

The layer only initializes the channel scale factors when the Scale property is empty.

Data Types: char | string | function_handle
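For example, the following sketch supplies a custom initializer through a function handle; the function handle and sampling scheme are illustrative, not part of the layer's interface:

% Sample initial scale factors uniformly from the interval [0.9, 1.1]
scaleInit = @(sz) 0.9 + 0.2*rand(sz);
layer = groupNormalizationLayer(4,'ScaleInitializer',scaleInit);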

Function to initialize the channel offsets, specified as one of the following:

  • 'zeros' – Initialize the channel offsets with zeros.

  • 'ones' – Initialize the channel offsets with ones.

  • 'narrow-normal' – Initialize the channel offsets by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.

  • Function handle – Initialize the channel offsets with a custom function. If you specify a function handle, then the function must be of the form offset = func(sz), where sz is the size of the offset. For an example, see Specify Custom Weight Initialization Function.

The layer only initializes the channel offsets when the Offset property is empty.

Data Types: char | string | function_handle
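Similarly, you can select one of the built-in offset initializers by name (the group count here is illustrative):

layer = groupNormalizationLayer(4,'OffsetInitializer','narrow-normal');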

Channel scale factors γ, specified as a numeric array.

The channel scale factors are learnable parameters. When you train a network, if Scale is nonempty, then trainNetwork uses the Scale property as the initial value. If Scale is empty, then trainNetwork uses the initializer specified by ScaleInitializer.

At training time, Scale is one of the following:

  • For 2-D image input, a numeric array of size 1-by-1-by-NumChannels

  • For 3-D image input, a numeric array of size 1-by-1-by-1-by-NumChannels

  • For feature or sequence input, a numeric array of size NumChannels-by-1

Channel offsets β, specified as a numeric array.

The channel offsets are learnable parameters. When you train a network, if Offset is nonempty, then trainNetwork uses the Offset property as the initial value. If Offset is empty, then trainNetwork uses the initializer specified by OffsetInitializer.

At training time, Offset is one of the following:

  • For 2-D image input, a numeric array of size 1-by-1-by-NumChannels

  • For 3-D image input, a numeric array of size 1-by-1-by-1-by-NumChannels

  • For feature or sequence input, a numeric array of size NumChannels-by-1
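For example, for 2-D image input with a known number of channels, you can supply initial values for both learnable parameters directly; the channel count, group count, and values below are illustrative:

numChannels = 20;
layer = groupNormalizationLayer(4, ...
    'Scale',ones(1,1,numChannels), ...
    'Offset',zeros(1,1,numChannels));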

Learning Rate and Regularization

Learning rate factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the scale factors in a layer. For example, if ScaleLearnRateFactor is 2, then the learning rate for the scale factors in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Learning rate factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the offsets in a layer. For example, if OffsetLearnRateFactor is 2, then the learning rate for the offsets in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.
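For example, to make the scale factors and offsets of this layer learn twice as fast as the other learnable parameters in the network, set both factors to 2 (an illustrative choice):

layer = groupNormalizationLayer(4, ...
    'ScaleLearnRateFactor',2, ...
    'OffsetLearnRateFactor',2);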

L2 regularization factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the scale factors in a layer. For example, if ScaleL2Factor is 2, then the L2 regularization for the scale factors in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

L2 regularization factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the offsets in a layer. For example, if OffsetL2Factor is 2, then the L2 regularization for the offsets in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.
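For example, to exclude the scale factors and offsets of this layer from L2 regularization, set both factors to 0 (an illustrative choice):

layer = groupNormalizationLayer(4,'ScaleL2Factor',0,'OffsetL2Factor',0);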

Layer

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically assign names to layers with Name set to ''.

Data Types: char | string

This property is read-only.

Number of inputs of the layer. This layer accepts a single input only.

Data Types: double

This property is read-only.

Input names of the layer. This layer accepts a single input only.

Data Types: cell

This property is read-only.

Number of outputs of the layer. This layer has a single output only.

Data Types: double

This property is read-only.

Output names of the layer. This layer has a single output only.

Data Types: cell

Examples

Create a group normalization layer that normalizes incoming data across three groups of channels. Name the layer 'groupnorm'.

layer = groupNormalizationLayer(3,'Name','groupnorm')
layer = 
  GroupNormalizationLayer with properties:

           Name: 'groupnorm'
    NumChannels: 'auto'

   Hyperparameters
      NumGroups: 3
        Epsilon: 1.0000e-05

   Learnable Parameters
         Offset: []
          Scale: []

  Show all properties

Include a group normalization layer in a Layer array. Normalize the incoming 20 channels in four groups.

layers = [
    imageInputLayer([28 28 3])
    convolution2dLayer(5,20)
    groupNormalizationLayer(4)
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer]
layers = 
  8x1 Layer array with layers:

     1   ''   Image Input             28x28x3 images with 'zerocenter' normalization
     2   ''   Convolution             20 5x5 convolutions with stride [1  1] and padding [0  0  0  0]
     3   ''   Group Normalization     Group normalization
     4   ''   ReLU                    ReLU
     5   ''   Max Pooling             2x2 max pooling with stride [2  2] and padding [0  0  0  0]
     6   ''   Fully Connected         10 fully connected layer
     7   ''   Softmax                 softmax
     8   ''   Classification Output   crossentropyex


Algorithms

The group normalization operation normalizes the elements x_i of the input by first calculating the mean μ_G and variance σ_G^2 over the spatial, time, and grouped subsets of the channel dimensions for each observation independently. Then, it calculates the normalized activations as

$$\hat{x}_i = \frac{x_i - \mu_G}{\sqrt{\sigma_G^2 + \epsilon}},$$

where ϵ is a constant that improves numerical stability when the variance is very small. To allow for the possibility that inputs with zero mean and unit variance are not optimal for the operations that follow group normalization, the group normalization operation further shifts and scales the activations using the transformation

$$y_i = \gamma \hat{x}_i + \beta,$$

where the offset β and scale factor γ are learnable parameters that are updated during network training.
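As an illustration of this computation, the following sketch applies the same normalization directly to a 4-D image array in MATLAB. The array sizes, group count, and variable names are illustrative; a groupNormalizationLayer performs this computation for you during training and inference.

X = rand(28,28,20,8);                % example input: 28-by-28 images, 20 channels, batch of 8
numGroups = 4;                       % must divide the number of channels exactly
epsilon = 1e-5;                      % stability constant
gamma = ones(1,1,20);                % learnable scale factors, one per channel
beta = zeros(1,1,20);                % learnable offsets, one per channel

[H,W,C,N] = size(X);
Xg = reshape(X,H,W,C/numGroups,numGroups,N);   % split channels into groups
mu = mean(Xg,[1 2 3]);                         % mean over spatial and grouped channel dims
sigma2 = var(Xg,1,[1 2 3]);                    % variance over the same dimensions
Xhat = reshape((Xg - mu)./sqrt(sigma2 + epsilon),H,W,C,N);
Y = gamma.*Xhat + beta;                        % scale and shift each channel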

References

[1] Wu, Yuxin, and Kaiming He. “Group Normalization.” Preprint submitted June 11, 2018. https://arxiv.org/abs/1803.08494.

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.

Introduced in R2020b