# groupNormalizationLayer

Group normalization layer

## Description

A group normalization layer normalizes a mini-batch of data across grouped subsets of channels for each observation independently. To speed up training of convolutional neural networks and reduce the sensitivity to network initialization, use group normalization layers between convolutional layers and nonlinearities, such as ReLU layers.

After normalization, the layer scales the input with a learnable scale factor γ and shifts it by a learnable offset β.

## Creation

### Description

layer = groupNormalizationLayer(numGroups) creates a group normalization layer and sets the NumGroups property.

layer = groupNormalizationLayer(numGroups,Name,Value) creates a group normalization layer and sets the optional Epsilon, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value arguments. You can specify multiple name-value arguments. Enclose each property name in quotes.
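
For example, the following call is a minimal sketch that sets the Epsilon and Name properties at creation (the values are illustrative):

```matlab
% Create a group normalization layer with two groups, a larger
% epsilon for extra numerical headroom, and a custom name
layer = groupNormalizationLayer(2,'Epsilon',1e-4,'Name','gn1');
```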

### Input Arguments

numGroups – Number of groups into which to divide the channels of the input data, specified as one of the following (see the sketch after this list):

• Positive integer – Divide the incoming channels into the specified number of groups. The specified number of groups must divide the number of channels of the input data exactly.

• 'all-channels' – Group all incoming channels into a single group. This operation is also known as layer normalization. Alternatively, use layerNormalizationLayer.

• 'channel-wise' – Treat all incoming channels as separate groups. This operation is also known as instance normalization. Alternatively, use instanceNormalizationLayer.
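
For example, the following calls are a minimal sketch of the three options (the group count is illustrative; the number of input channels must be divisible by it):

```matlab
% Divide the input channels into 4 groups; the channel count
% of the input data must be divisible by 4
layer = groupNormalizationLayer(4);

% Normalize across all channels jointly (layer normalization)
layer = groupNormalizationLayer('all-channels');

% Normalize each channel separately (instance normalization)
layer = groupNormalizationLayer('channel-wise');
```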

## Properties

### Group Normalization

Epsilon – Constant to add to the mini-batch variances, specified as a numeric scalar greater than or equal to 1e-5.

The layer adds this constant to the mini-batch variances before normalization to ensure numerical stability and avoid division by zero.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

NumChannels – Number of input channels, specified as one of the following:

• 'auto' – Automatically determine the number of input channels at training time.

• Positive integer – Configure the layer for the specified number of input channels. NumChannels and the number of channels in the layer input data must match. For example, if the input is an RGB image, then NumChannels must be 3. If the input is the output of a convolutional layer with 16 filters, then NumChannels must be 16.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string

### Parameters and Initialization

ScaleInitializer – Function to initialize the channel scale factors, specified as one of the following:

• 'ones' – Initialize the channel scale factors with ones.

• 'zeros' – Initialize the channel scale factors with zeros.

• 'narrow-normal' – Initialize the channel scale factors by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.

• Function handle – Initialize the channel scale factors with a custom function. If you specify a function handle, then the function must be of the form scale = func(sz), where sz is the size of the scale. For an example, see Specify Custom Weight Initialization Function and the sketch after this property description.

The layer only initializes the channel scale factors when the Scale property is empty.

Data Types: char | string | function_handle
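
For instance, the following is a minimal sketch of a custom initializer supplied as a function handle (the uniform sampling range is a hypothetical choice):

```matlab
% Hypothetical initializer: draw scale factors uniformly from [0.9, 1.1];
% sz is the size of the scale, as described above
scaleInit = @(sz) 0.9 + 0.2*rand(sz);

layer = groupNormalizationLayer(4,'ScaleInitializer',scaleInit);
```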

OffsetInitializer – Function to initialize the channel offsets, specified as one of the following:

• 'zeros' – Initialize the channel offsets with zeros.

• 'ones' – Initialize the channel offsets with ones.

• 'narrow-normal' – Initialize the channel offsets by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.

• Function handle – Initialize the channel offsets with a custom function. If you specify a function handle, then the function must be of the form offset = func(sz), where sz is the size of the offset. For an example, see Specify Custom Weight Initialization Function.

The layer only initializes the channel offsets when the Offset property is empty.

Data Types: char | string | function_handle

Scale – Channel scale factors γ, specified as a numeric array.

The channel scale factors are learnable parameters. When you train a network, if Scale is nonempty, then trainNetwork uses the Scale property as the initial value. If Scale is empty, then trainNetwork uses the initializer specified by ScaleInitializer.

At training time, Scale is one of the following:

• For 2-D image input, a numeric array of size 1-by-1-by-NumChannels

• For 3-D image input, a numeric array of size 1-by-1-by-1-by-NumChannels

• For feature or sequence input, a numeric array of size NumChannels-by-1

Data Types: single | double

Offset – Channel offsets β, specified as a numeric array.

The channel offsets are learnable parameters. When you train a network, if Offset is nonempty, then trainNetwork uses the Offset property as the initial value. If Offset is empty, then trainNetwork uses the initializer specified by OffsetInitializer.

At training time, Offset is one of the following:

• For 2-D image input, a numeric array of size 1-by-1-by-NumChannels

• For 3-D image input, a numeric array of size 1-by-1-by-1-by-NumChannels

• For feature or sequence input, a numeric array of size NumChannels-by-1

Data Types: single | double
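
For example, this sketch supplies initial values for both learnable parameters for 2-D image input with eight channels (the channel count is illustrative; the array sizes follow the lists above):

```matlab
% Initial Scale and Offset for 2-D image input with 8 channels;
% both arrays must be 1-by-1-by-NumChannels
layer = groupNormalizationLayer(2, ...
    'Scale',ones(1,1,8), ...
    'Offset',zeros(1,1,8));
```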

### Learning Rate and Regularization

ScaleLearnRateFactor – Learning rate factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the scale factors in a layer. For example, if ScaleLearnRateFactor is 2, then the learning rate for the scale factors in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

OffsetLearnRateFactor – Learning rate factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the offsets in a layer. For example, if OffsetLearnRateFactor is 2, then the learning rate for the offsets in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

ScaleL2Factor – L2 regularization factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the scale factors in a layer. For example, if ScaleL2Factor is 2, then the L2 regularization for the scale factors in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

OffsetL2Factor – L2 regularization factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the offsets in a layer. For example, if OffsetL2Factor is 2, then the L2 regularization for the offsets in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
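
For example, this sketch sets per-layer factors at creation (the factor values are illustrative):

```matlab
% The global settings from trainingOptions are multiplied by these
% factors for this layer's learnable parameters
layer = groupNormalizationLayer(4, ...
    'ScaleLearnRateFactor',2, ...  % scale factors learn twice as fast
    'OffsetL2Factor',0);           % no L2 penalty on the offsets
```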

### Layer

Name – Layer name, specified as a character vector or a string scalar. For Layer array input, the trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically assign names to layers with name ''.

Data Types: char | string

NumInputs – Number of inputs of the layer. This layer accepts a single input only.

Data Types: double

InputNames – Input names of the layer. This layer accepts a single input only.

Data Types: cell

NumOutputs – Number of outputs of the layer. This layer has a single output only.

Data Types: double

OutputNames – Output names of the layer. This layer has a single output only.

Data Types: cell

## Examples

Create a group normalization layer that normalizes incoming data across three groups of channels. Name the layer 'groupnorm'.

```matlab
layer = groupNormalizationLayer(3,'Name','groupnorm')
```

```
layer =
  GroupNormalizationLayer with properties:

           Name: 'groupnorm'
    NumChannels: 'auto'

   Hyperparameters
      NumGroups: 3
        Epsilon: 1.0000e-05

   Learnable Parameters
         Offset: []
          Scale: []
```

Include a group normalization layer in a Layer array. Normalize the incoming 20 channels in four groups of five channels each.

```matlab
layers = [
    imageInputLayer([28 28 3])
    convolution2dLayer(5,20)
    groupNormalizationLayer(4)
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer]
```

```
layers =
  8x1 Layer array with layers:

     1   ''   Image Input             28x28x3 images with 'zerocenter' normalization
     2   ''   Convolution             20 5x5 convolutions with stride [1  1] and padding [0  0  0  0]
     3   ''   Group Normalization     Group normalization
     4   ''   ReLU                    ReLU
     5   ''   Max Pooling             2x2 max pooling with stride [2  2] and padding [0  0  0  0]
     6   ''   Fully Connected         10 fully connected layer
     7   ''   Softmax                 softmax
     8   ''   Classification Output   crossentropyex
```

## Algorithms

The group normalization operation [1] normalizes the elements $x_i$ of the input by first calculating the mean $\mu_G$ and variance $\sigma_G^2$ over the spatial and time dimensions and over grouped subsets of the channels for each observation independently. Then, it calculates the normalized activations as

$$\hat{x}_i = \frac{x_i - \mu_G}{\sqrt{\sigma_G^2 + \epsilon}},$$

where ϵ is a constant that improves numerical stability when the variance is very small. To allow for the possibility that inputs with zero mean and unit variance are not optimal for the operations that follow group normalization, the group normalization operation further shifts and scales the activations using the transformation

$$y_i = \gamma \hat{x}_i + \beta,$$

where the offset β and scale factor γ are learnable parameters that are updated during network training.
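
The following sketch reproduces this computation for a single observation with 2-D image input, assuming equal-sized contiguous channel groups (the sizes are illustrative):

```matlab
% Group normalization by hand for one H-by-W-by-C observation
H = 4; W = 4; C = 6; G = 3;          % 6 channels in 3 groups of 2
x = randn(H,W,C,'single');
epsilon = 1e-5;                      % corresponds to the Epsilon property
gamma = ones(1,1,C,'single');        % learnable per-channel scale (Scale)
beta  = zeros(1,1,C,'single');       % learnable per-channel offset (Offset)

xhat = zeros(size(x),'like',x);
channelsPerGroup = C/G;
for g = 1:G
    idx = (g-1)*channelsPerGroup + (1:channelsPerGroup);
    xg  = x(:,:,idx);
    mu     = mean(xg(:));            % mean over spatial and grouped channel dims
    sigma2 = var(xg(:),1);           % variance normalized by the number of elements
    xhat(:,:,idx) = (xg - mu) ./ sqrt(sigma2 + epsilon);
end
y = gamma .* xhat + beta;            % scale and shift with learnable parameters
```

The built-in layer applies this computation to every observation in the mini-batch, normalizing each observation independently, as above.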

## References

[1] Wu, Yuxin, and Kaiming He. “Group Normalization.” Preprint submitted June 11, 2018. https://arxiv.org/abs/1803.08494.

## Version History

Introduced in R2020b