
checkLayer

Check validity of custom layer

Syntax

checkLayer(layer,validInputSize)
checkLayer(layer,validInputSize,'ObservationDimension',dim)

Description


checkLayer(layer,validInputSize) checks the validity of a custom layer using generated data of the sizes in validInputSize. For layers with a single input, set validInputSize to a typical size of input data to the layer. For layers with multiple inputs, set validInputSize to a cell array of typical sizes, where each element corresponds to a layer input.


checkLayer(layer,validInputSize,'ObservationDimension',dim) specifies the dimension of the data that corresponds to observations. If you specify this parameter, then the function checks the layer for both a single observation and multiple observations.

Examples


Check Layer Validity

Check the validity of the example custom layer preluLayer.

Define a custom PReLU layer. To create this layer, save the file preluLayer.m in the current folder.

Create an instance of the layer and check that it is valid using checkLayer. Set the valid input size to the typical size of a single observation input to the layer. For a single input, the layer expects observations of size h-by-w-by-c, where h, w, and c are the height, width, and number of channels of the previous layer output, respectively.

Specify validInputSize as the typical size of an input array.

layer = preluLayer(20,'prelu');
validInputSize = [5 5 20];
checkLayer(layer,validInputSize)
Skipping multi-observation tests. To enable tests with multiple observations, specify the 'ObservationDimension' option in checkLayer.
For 2-D image data, set 'ObservationDimension' to 4.
For 3-D image data, set 'ObservationDimension' to 5.
For sequence data, set 'ObservationDimension' to 2.
 
Skipping GPU tests. No compatible GPU device found.
 
Running nnet.checklayer.TestCase
.......... ...
Done nnet.checklayer.TestCase
__________

Test Summary:
	 13 Passed, 0 Failed, 0 Incomplete, 11 Skipped.
	 Time elapsed: 3.4818 seconds.

The results show the number of passed, failed, and skipped tests. If you do not specify the 'ObservationDimension' option, or if no compatible GPU is available, then the function skips the corresponding tests.

Check Multiple Observations

For multi-observation input, the layer expects an array of observations of size h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels, respectively, and N is the number of observations.

To check the layer validity for multiple observations, specify the typical size of an observation and set 'ObservationDimension' to 4.

layer = preluLayer(20,'prelu');
validInputSize = [5 5 20];
checkLayer(layer,validInputSize,'ObservationDimension',4)
Skipping GPU tests. No compatible GPU device found.
 
Running nnet.checklayer.TestCase
.......... ........
Done nnet.checklayer.TestCase
__________

Test Summary:
	 18 Passed, 0 Failed, 0 Incomplete, 6 Skipped.
	 Time elapsed: 3.1679 seconds.

In this case, the function does not detect any issues with the layer.

Input Arguments


layer

Custom layer, specified as an nnet.layer.Layer object, nnet.layer.ClassificationLayer object, or nnet.layer.RegressionLayer object. For an example showing how to define your own custom layer, see Define Custom Deep Learning Layer with Learnable Parameters.

validInputSize

Valid input sizes of the layer, specified as a vector of positive integers or a cell array of vectors of positive integers.

  • For layers with a single input, specify validInputSize as a vector of integers corresponding to the dimensions of the input data. For example, [5 5 10] corresponds to valid input data of size 5-by-5-by-10.

  • For layers with multiple inputs, specify validInputSize as a cell array of vectors, where each vector corresponds to a layer input and the elements of the vectors correspond to the dimensions of the corresponding input data. For example, {[24 24 20],[24 24 10]} corresponds to the valid input sizes of two inputs, where 24-by-24-by-20 is a valid input size for the first input and 24-by-24-by-10 is a valid input size for the second input.

For more information, see Layer Input Sizes.

For large input sizes, the gradient checks take longer to run. To speed up the tests, specify a smaller valid input size.

Example: [5 5 10]

Example: {[24 24 20],[24 24 10]}

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | cell
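
As a sketch, checking a custom layer with two inputs might look like the following. Here, weightedAdditionLayer is a hypothetical custom layer (not shipped with the toolbox); its class file is assumed to be on the path.

```matlab
% Hypothetical two-input custom layer; weightedAdditionLayer.m is an
% assumed file name, saved in the current folder.
layer = weightedAdditionLayer(2,'add');

% One size vector per layer input, in the order the layer expects them.
validInputSize = {[24 24 20],[24 24 20]};
checkLayer(layer,validInputSize,'ObservationDimension',4)
```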

dim

Observation dimension, specified as a positive integer.

The observation dimension specifies which dimension of the layer input data corresponds to observations. For example, if the layer expects input data of size h-by-w-by-c-by-N, where h, w, and c correspond to the height, width, and number of channels of the input data, respectively, and N corresponds to the number of observations, then the observation dimension is 4. For more information, see Layer Input Sizes.

If you specify the observation dimension, then the checkLayer function checks that the layer functions are valid using generated data with mini-batches of size 1 and 2. If you do not specify the observation dimension, then the function skips the corresponding tests.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

More About


Layer Input Sizes

For each layer, the valid input size and the observation dimension depend on the output of the previous layer.

Intermediate Layers

For intermediate layers (layers of type nnet.layer.Layer), the valid input size and the observation dimension depend on the type of data input to the layer. For layers with a single input, specify validInputSize as a vector of integers corresponding to the dimensions of the input data. For layers with multiple inputs, specify validInputSize as a cell array of vectors, where each vector corresponds to a layer input and the elements of the vectors correspond to the dimensions of the corresponding input data. For large input sizes, the gradient checks take longer to run. To speed up the tests, specify a smaller valid input size.

  • 2-D images: h-by-w-by-c-by-N, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, and N is the number of observations. Observation dimension: 4

  • 3-D images: h-by-w-by-d-by-c-by-N, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, and N is the number of observations. Observation dimension: 5

  • Vector sequences: c-by-N-by-S, where c is the number of features of the sequences, N is the number of observations, and S is the sequence length. Observation dimension: 2

  • 2-D image sequences: h-by-w-by-c-by-N-by-S, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, N is the number of observations, and S is the sequence length. Observation dimension: 4

  • 3-D image sequences: h-by-w-by-d-by-c-by-N-by-S, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, N is the number of observations, and S is the sequence length. Observation dimension: 5

For example, for 2-D image classification problems, set validInputSize to [h w c], where h, w, and c correspond to the height, width, and number of channels of the images, respectively, and 'ObservationDimension' to 4.
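
For instance, the preluLayer example from earlier on this page operates on 2-D image data, so a typical check (assuming preluLayer.m is saved in the current folder) might look like:

```matlab
layer = preluLayer(20,'prelu');
% [h w c] for a single 2-D image observation; observations lie along
% dimension 4, so set 'ObservationDimension' to 4.
checkLayer(layer,[28 28 20],'ObservationDimension',4)
```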

Output Layers

For output layers (layers of type nnet.layer.ClassificationLayer or nnet.layer.RegressionLayer), set validInputSize to the typical size of a single input observation Y to the layer.

For classification problems, the valid input size and the observation dimension of Y depend on the type of problem:

  • 2-D image classification: 1-by-1-by-K-by-N, where K is the number of classes and N is the number of observations. Observation dimension: 4

  • 3-D image classification: 1-by-1-by-1-by-K-by-N, where K is the number of classes and N is the number of observations. Observation dimension: 5

  • Sequence-to-label classification: K-by-N, where K is the number of classes and N is the number of observations. Observation dimension: 2

  • Sequence-to-sequence classification: K-by-N-by-S, where K is the number of classes, N is the number of observations, and S is the sequence length. Observation dimension: 2

For example, for 2-D image classification problems, set validInputSize to [1 1 K], where K is the number of classes, and 'ObservationDimension' to 4.
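
A minimal sketch for a custom classification output layer follows; myClassificationLayer is a hypothetical layer name used for illustration, not a shipped function.

```matlab
% K = 10 classes; the layer receives 1-by-1-by-K predictions
% per observation, with observations along dimension 4.
layer = myClassificationLayer('crossentropy');  % hypothetical custom layer
checkLayer(layer,[1 1 10],'ObservationDimension',4)
```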

For regression problems, the dimensions of Y also depend on the type of problem. The following table describes the dimensions of Y.

  • 2-D image regression: 1-by-1-by-R-by-N, where R is the number of responses and N is the number of observations. Observation dimension: 4

  • 2-D image-to-image regression: h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels of the output, respectively, and N is the number of observations. Observation dimension: 4

  • 3-D image regression: 1-by-1-by-1-by-R-by-N, where R is the number of responses and N is the number of observations. Observation dimension: 5

  • 3-D image-to-image regression: h-by-w-by-d-by-c-by-N, where h, w, d, and c are the height, width, depth, and number of channels of the output, respectively, and N is the number of observations. Observation dimension: 5

  • Sequence-to-one regression: R-by-N, where R is the number of responses and N is the number of observations. Observation dimension: 2

  • Sequence-to-sequence regression: R-by-N-by-S, where R is the number of responses, N is the number of observations, and S is the sequence length. Observation dimension: 2

For example, for 2-D image regression problems, set validInputSize to [1 1 R], where R is the number of responses, and 'ObservationDimension' to 4.
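
A corresponding sketch for a custom regression output layer; maeRegressionLayer is a hypothetical layer name used for illustration.

```matlab
% R = 3 responses; the layer receives 1-by-1-by-R predictions
% per observation, with observations along dimension 4.
layer = maeRegressionLayer('mae');  % hypothetical custom layer
checkLayer(layer,[1 1 3],'ObservationDimension',4)
```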

Algorithms


List of Tests

The checkLayer function checks the validity of a custom layer by performing a series of tests, described in these tables. For more information on the tests used by checkLayer, see Check Custom Layer Validity.

Intermediate Layers

The checkLayer function uses these tests to check the validity of custom intermediate layers (layers of type nnet.layer.Layer).

  • methodSignaturesAreCorrect: The syntaxes of the layer functions are correctly defined.

  • predictDoesNotError: predict does not error.

  • forwardDoesNotError: forward does not error.

  • forwardPredictAreConsistentInSize: forward and predict output values of the same size.

  • backwardDoesNotError: backward does not error.

  • backwardIsConsistentInSize: The outputs of backward are consistent in size: the derivatives with respect to each input are the same size as the corresponding input, and the derivatives with respect to each learnable parameter are the same size as the corresponding learnable parameter.

  • predictIsConsistentInType: The outputs of predict are consistent in type with the inputs.

  • forwardIsConsistentInType: The outputs of forward are consistent in type with the inputs.

  • backwardIsConsistentInType: The outputs of backward are consistent in type with the inputs.

  • gradientsAreNumericallyCorrect: The gradients computed in backward are consistent with the numerical gradients.

The tests predictIsConsistentInType, forwardIsConsistentInType, and backwardIsConsistentInType also check for GPU compatibility. To execute the layer functions on a GPU, the functions must support inputs and outputs of type gpuArray with the underlying data type single.
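
For example, a predict method written with element-wise operations, such as the PReLU computation below, works unchanged for single, double, and gpuArray inputs, so its output type follows its input type. This is a sketch consistent with the preluLayer example earlier on this page, not the definitive implementation.

```matlab
function Z = predict(layer, X)
    % max, min, and .* all preserve the class of X, including gpuArray
    % with underlying type single, so Z has the same type as X,
    % which is what the *IsConsistentInType tests check.
    Z = max(X,0) + layer.Alpha .* min(0,X);
end
```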

If you have not implemented forward, then checkLayer does not run the forwardDoesNotError, forwardPredictAreConsistentInSize, and forwardIsConsistentInType tests.

Output Layers

The checkLayer function uses these tests to check the validity of custom output layers (layers of type nnet.layer.ClassificationLayer or nnet.layer.RegressionLayer).

  • forwardLossDoesNotError: forwardLoss does not error.

  • backwardLossDoesNotError: backwardLoss does not error.

  • forwardLossIsScalar: The output of forwardLoss is scalar.

  • backwardLossIsConsistentInSize: The output of backwardLoss is consistent in size: dLdY is the same size as the predictions Y.

  • forwardLossIsConsistentInType: The output of forwardLoss is consistent in type: loss is the same type as the predictions Y.

  • backwardLossIsConsistentInType: The output of backwardLoss is consistent in type: dLdY is the same type as the predictions Y.

  • gradientsAreNumericallyCorrect: The gradients computed in backwardLoss are numerically correct.

The forwardLossIsConsistentInType and backwardLossIsConsistentInType tests also check for GPU compatibility. To execute the layer functions on a GPU, the functions must support inputs and outputs of type gpuArray with the underlying data type single.

Introduced in R2018a