# checkLayer

Check validity of custom or function layer

## Syntax

```
checkLayer(layer,validInputSize)
checkLayer(layer,validInputSize,Name=Value)
```

## Description


`checkLayer(layer,validInputSize)` checks the validity of a custom or function layer using generated data of the sizes in `validInputSize`. For layers with a single input, set `validInputSize` to a typical size of input data to the layer. For layers with multiple inputs, set `validInputSize` to a cell array of typical sizes, where each element corresponds to a layer input.


`checkLayer(layer,validInputSize,Name=Value)` specifies additional options using one or more name-value arguments.

## Examples


Check the validity of the example custom layer `preluLayer`.

The custom layer `preluLayer`, attached to this example as a supporting file, applies the PReLU operation to the input data. To access this layer, open this example as a live script.

Create an instance of the layer.

`layer = preluLayer;`

Because the layer has a custom initialize function, initialize the layer using a `networkDataLayout` object that specifies the expected input size and format of a single observation of typical input to the layer.

Specify a valid input size of `[24 24 20]`, where the dimensions correspond to the height, width, and number of channels of the previous layer output.

```matlab
validInputSize = [24 24 20];
layout = networkDataLayout(validInputSize,"SSC");
layer = initialize(layer,layout);
```

Check the layer validity using `checkLayer`. Specify the valid input size as the size used to initialize the layer. When you pass data through the network, the layer expects 4-D array inputs, where the first three dimensions correspond to the height, width, and number of channels of the previous layer output, and the fourth dimension corresponds to the observations.

`checkLayer(layer,validInputSize)`
```
Skipping multi-observation tests. To enable tests with multiple observations, specify the 'ObservationDimension' option.
For 2-D image data, set 'ObservationDimension' to 4.
For 3-D image data, set 'ObservationDimension' to 5.
For sequence data, set 'ObservationDimension' to 2.
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ..
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 12 Passed, 0 Failed, 0 Incomplete, 16 Skipped.
	 Time elapsed: 0.054851 seconds.
```

The results show the number of passed, failed, and skipped tests. If you do not specify the `ObservationDimension` option, or do not have a GPU, then the function skips the corresponding tests.

**Check Multiple Observations**

For multi-observation image input, the layer expects an array of observations of size h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels, respectively, and N is the number of observations.

To check the layer validity for multiple observations, specify the typical size of an observation and set the `ObservationDimension` option to 4.

`checkLayer(layer,validInputSize,ObservationDimension=4)`
```
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ........
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 18 Passed, 0 Failed, 0 Incomplete, 10 Skipped.
	 Time elapsed: 0.030498 seconds.
```

In this case, the function does not detect any issues with the layer.

Create a function layer object that applies the softsign operation to the input. The softsign operation is given by the function $f(x)=\frac{x}{1+|x|}$.

`layer = functionLayer(@(X) X./(1 + abs(X)))`
```
layer = 
  FunctionLayer with properties:

             Name: ''
       PredictFcn: @(X)X./(1+abs(X))
      Formattable: 0
    Acceleratable: 0

   Learnable Parameters
    No properties.

   State Parameters
    No properties.
```

Check that the layer is valid using the `checkLayer` function. Set the valid input size to the typical size of a single observation input to the layer. For example, for a single input, the layer expects observations of size h-by-w-by-c, where h, w, and c are the height, width, and number of channels of the previous layer output, respectively.

Specify `validInputSize` as the typical size of an input array.

```matlab
validInputSize = [5 5 20];
checkLayer(layer,validInputSize)
```
```
Skipping multi-observation tests. To enable tests with multiple observations, specify the 'ObservationDimension' option.
For 2-D image data, set 'ObservationDimension' to 4.
For 3-D image data, set 'ObservationDimension' to 5.
For sequence data, set 'ObservationDimension' to 2.
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ..
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 12 Passed, 0 Failed, 0 Incomplete, 16 Skipped.
	 Time elapsed: 0.28075 seconds.
```

The results show the number of passed, failed, and skipped tests. If you do not specify the `ObservationDimension` option, or do not have a GPU, then the function skips the corresponding tests.

**Check Multiple Observations**

For multi-observation image input, the layer expects an array of observations of size h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels, respectively, and N is the number of observations.

To check the layer validity for multiple observations, specify the typical size of an observation and set the `ObservationDimension` option to 4.

```matlab
layer = functionLayer(@(X) X./(1 + abs(X)));
validInputSize = [5 5 20];
checkLayer(layer,validInputSize,ObservationDimension=4)
```
```
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ........
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 18 Passed, 0 Failed, 0 Incomplete, 10 Skipped.
	 Time elapsed: 0.16798 seconds.
```

In this case, the function does not detect any issues with the layer.

Check the code generation compatibility of the custom layer `codegenPreluLayer`.

The custom layer `codegenPreluLayer`, attached to this example as a supporting file, applies the PReLU operation to the input data. To access this layer, open this example as a live script.

Create an instance of the layer and check its validity using `checkLayer`. Specify the valid input size as the size of a single observation of typical input to the layer. The layer expects 4-D array inputs, where the first three dimensions correspond to the height, width, and number of channels of the previous layer output, and the fourth dimension corresponds to the observations.

Specify the typical size of a single input observation and set the `ObservationDimension` option to 4. To check for code generation compatibility, set the `CheckCodegenCompatibility` option to `true`. The `checkLayer` function does not check that functions used by the layer are compatible with code generation. To check that the custom layer definition is supported for code generation, first use the Code Generation Readiness app. For more information, see Check Code by Using the Code Generation Readiness Tool (MATLAB Coder).

```matlab
layer = codegenPreluLayer(20,"prelu");
validInputSize = [24 24 20];
checkLayer(layer,validInputSize,ObservationDimension=4,CheckCodegenCompatibility=true)
```
```
Skipping GPU tests. No compatible GPU device found.

Running nnet.checklayer.TestLayerWithoutBackward
.......... .......... ...
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 23 Passed, 0 Failed, 0 Incomplete, 5 Skipped.
	 Time elapsed: 0.83484 seconds.
```

The function does not detect any issues with the layer.

## Input Arguments


**`layer` — Layer to check**

Layer to check, specified as an `nnet.layer.Layer`, `nnet.layer.ClassificationLayer`, `nnet.layer.RegressionLayer`, or `FunctionLayer` object.

If `layer` has learnable or state parameters, then the layer must be initialized. If the layer has a custom `initialize` function, then first initialize the layer by calling its `initialize` function with `networkDataLayout` objects.

The `checkLayer` function does not support layers that inherit from `nnet.layer.Formattable`.

For an example showing how to define your own custom layer, see Define Custom Deep Learning Layer with Learnable Parameters. To create a layer that applies a specified function, use `functionLayer`.

**`validInputSize` — Valid input sizes of the layer**

Valid input sizes of the layer, specified as a vector of positive integers or cell array of vectors of positive integers.

• For layers with a single input, specify `validInputSize` as a vector of integers corresponding to the dimensions of the input data. For example, `[5 5 10]` corresponds to valid input data of size 5-by-5-by-10.

• For layers with multiple inputs, specify `validInputSize` as a cell array of vectors, where each vector corresponds to a layer input and the elements of the vectors correspond to the dimensions of the corresponding input data. For example, `{[24 24 20],[24 24 10]}` corresponds to the valid input sizes of two inputs, where 24-by-24-by-20 is a valid input size for the first input and 24-by-24-by-10 is a valid input size for the second input.

For large input sizes, the gradient checks take longer to run. To speed up the check, specify a smaller valid input size.

Example: `[5 5 10]`

Example: `{[24 24 20],[24 24 10]}`

Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `cell`
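As a sketch of the multiple-input case, pass one size vector per layer input in a cell array. Here `weightedAdditionLayer` is a hypothetical name standing in for any two-input custom layer on your path:

```matlab
% weightedAdditionLayer is a hypothetical two-input custom layer;
% substitute your own layer class.
layer = weightedAdditionLayer(2,"add");
validInputSize = {[24 24 20],[24 24 20]};  % one size vector per layer input
checkLayer(layer,validInputSize,ObservationDimension=4)
```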

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: `ObservationDimension=4` sets the observation dimension to 4

**`ObservationDimension` — Observation dimension**

Observation dimension, specified as a positive integer.

The observation dimension specifies which dimension of the layer input data corresponds to observations. For example, if the layer expects input data of size h-by-w-by-c-by-N, where h, w, and c correspond to the height, width, and number of channels of the input data, respectively, and N corresponds to the number of observations, then the observation dimension is 4. For more information, see Layer Input Sizes.

If you specify the observation dimension, then the `checkLayer` function checks that the layer functions are valid using generated data with mini-batches of size 1 and 2. If you do not specify the observation dimension, then the function skips the corresponding tests.

Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`

**`CheckCodegenCompatibility` — Flag to enable code generation tests**

Flag to enable code generation tests, specified as `0` (false) or `1` (true).

If `CheckCodegenCompatibility` is `1` (true), then you must specify the `ObservationDimension` option.

Code generation supports intermediate layers with 2-D image or feature input only. Code generation does not support layers with state properties (properties with attribute `State`).

The `checkLayer` function does not check that functions used by the layer are compatible with code generation. To check that functions used by the custom layer also support code generation, first use the Code Generation Readiness app. For more information, see Check Code by Using the Code Generation Readiness Tool (MATLAB Coder).

For an example showing how to define a custom layer that supports code generation, see Define Custom Deep Learning Layer for Code Generation.

Data Types: `logical`


### Layer Input Sizes

For each layer, the valid input size and the observation dimension depend on the output of the previous layer.

**Intermediate Layers**

For intermediate layers (layers of type `nnet.layer.Layer`), the valid input size and the observation dimension depend on the type of data input to the layer.

• For layers with a single input, specify `validInputSize` as a vector of integers corresponding to the dimensions of the input data.

• For layers with multiple inputs, specify `validInputSize` as a cell array of vectors, where each vector corresponds to a layer input and the elements of the vectors correspond to the dimensions of the corresponding input data.

For large input sizes, the gradient checks take longer to run. To speed up the check, specify a smaller valid input size.

| Layer Input | Input Size | Observation Dimension |
| --- | --- | --- |
| Feature vectors | c-by-N, where c corresponds to the number of channels and N is the number of observations | 2 |
| 2-D images | h-by-w-by-c-by-N, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, and N is the number of observations | 4 |
| 3-D images | h-by-w-by-d-by-c-by-N, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, and N is the number of observations | 5 |
| Vector sequences | c-by-N-by-S, where c is the number of features of the sequences, N is the number of observations, and S is the sequence length | 2 |
| 2-D image sequences | h-by-w-by-c-by-N-by-S, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, N is the number of observations, and S is the sequence length | 4 |
| 3-D image sequences | h-by-w-by-d-by-c-by-N-by-S, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, N is the number of observations, and S is the sequence length | 5 |

For example, for 2-D image classification problems, set `validInputSize` to `[h w c]`, where `h`, `w`, and `c` correspond to the height, width, and number of channels of the images, respectively, and `ObservationDimension` to `4`.

Code generation supports intermediate layers with 2-D image or feature input only.
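For example, for a layer that operates on vector sequences, a single observation is c-by-1-by-S and the observation dimension is 2. A minimal sketch, assuming a custom layer `mySequenceLayer` (a placeholder name) on the path:

```matlab
% mySequenceLayer is a placeholder for a custom layer with sequence input.
layer = mySequenceLayer;
numChannels = 10;      % c, number of features per time step
sequenceLength = 100;  % S
validInputSize = [numChannels 1 sequenceLength];  % c-by-1-by-S, single observation
checkLayer(layer,validInputSize,ObservationDimension=2)
```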

**Output Layers**

For output layers (layers of type `nnet.layer.ClassificationLayer` or `nnet.layer.RegressionLayer`), set `validInputSize` to the typical size of a single input observation `Y` to the layer.

For classification problems, the valid input size and the observation dimension of `Y` depend on the type of problem:

| Task | Input Size | Observation Dimension |
| --- | --- | --- |
| 2-D image classification | 1-by-1-by-K-by-N, where K is the number of classes and N is the number of observations | 4 |
| 3-D image classification | 1-by-1-by-1-by-K-by-N, where K is the number of classes and N is the number of observations | 5 |
| Sequence-to-label classification | K-by-N, where K is the number of classes and N is the number of observations | 2 |
| Sequence-to-sequence classification | K-by-N-by-S, where K is the number of classes, N is the number of observations, and S is the sequence length | 2 |

For example, for 2-D image classification problems, set `validInputSize` to `[1 1 K]`, where `K` is the number of classes, and `ObservationDimension` to `4`.
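A minimal sketch for this case, assuming a custom classification output layer `myClassificationLayer` (a placeholder name) on the path:

```matlab
% myClassificationLayer is a placeholder custom classification output layer.
numClasses = 10;
layer = myClassificationLayer("myloss");
validInputSize = [1 1 numClasses];  % single observation of predictions Y
checkLayer(layer,validInputSize,ObservationDimension=4)
```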

For regression problems, the dimensions of `Y` also depend on the type of problem. The following table describes the dimensions of `Y`.

| Task | Input Size | Observation Dimension |
| --- | --- | --- |
| 2-D image regression | 1-by-1-by-R-by-N, where R is the number of responses and N is the number of observations | 4 |
| 2-D image-to-image regression | h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels of the output, respectively, and N is the number of observations | 4 |
| 3-D image regression | 1-by-1-by-1-by-R-by-N, where R is the number of responses and N is the number of observations | 5 |
| 3-D image-to-image regression | h-by-w-by-d-by-c-by-N, where h, w, d, and c are the height, width, depth, and number of channels of the output, respectively, and N is the number of observations | 5 |
| Sequence-to-one regression | R-by-N, where R is the number of responses and N is the number of observations | 2 |
| Sequence-to-sequence regression | R-by-N-by-S, where R is the number of responses, N is the number of observations, and S is the sequence length | 2 |

For example, for 2-D image regression problems, set `validInputSize` to `[1 1 R]`, where `R` is the number of responses, and `ObservationDimension` to `4`.
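Similarly, a sketch for a custom regression output layer (here `myRegressionLayer` is a placeholder name for your own layer class):

```matlab
% myRegressionLayer is a placeholder custom regression output layer.
numResponses = 3;
layer = myRegressionLayer("myloss");
validInputSize = [1 1 numResponses];  % single observation of predictions Y
checkLayer(layer,validInputSize,ObservationDimension=4)
```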

## Algorithms


### List of Tests

The `checkLayer` function checks the validity of a custom layer by performing a series of tests, described in these tables. For more information on the tests used by `checkLayer`, see Check Custom Layer Validity.

**Intermediate Layers**

The `checkLayer` function uses these tests to check the validity of custom intermediate layers (layers of type `nnet.layer.Layer`).

| Test | Description |
| --- | --- |
| `functionSyntaxesAreCorrect` | The syntaxes of the layer functions are correctly defined. |
| `predictDoesNotError` | The `predict` function does not error. |
| `forwardDoesNotError` | When specified, the `forward` function does not error. |
| `forwardPredictAreConsistentInSize` | When `forward` is specified, `forward` and `predict` output values of the same size. |
| `backwardDoesNotError` | When specified, `backward` does not error. |
| `backwardIsConsistentInSize` | When `backward` is specified, the outputs of `backward` are consistent in size: the derivatives with respect to each input are the same size as the corresponding input, and the derivatives with respect to each learnable parameter are the same size as the corresponding learnable parameter. |
| `predictIsConsistentInType` | The outputs of `predict` are consistent in type with the inputs. |
| `forwardIsConsistentInType` | When `forward` is specified, the outputs of `forward` are consistent in type with the inputs. |
| `backwardIsConsistentInType` | When `backward` is specified, the outputs of `backward` are consistent in type with the inputs. |
| `gradientsAreNumericallyCorrect` | When `backward` is specified, the gradients computed in `backward` are consistent with the numerical gradients. |
| `backwardPropagationDoesNotError` | When `backward` is not specified, the derivatives can be computed using automatic differentiation. |
| `predictReturnsValidStates` | For layers with state properties, the `predict` function returns valid states. |
| `forwardReturnsValidStates` | For layers with state properties, the `forward` function, if specified, returns valid states. |
| `resetStateDoesNotError` | For layers with state properties, the `resetState` function, if specified, does not error and resets the states to valid states. |
| `codegenPragmaDefinedInClassDef` | The pragma `%#codegen` for code generation is specified in the class file. |
| `layerPropertiesSupportCodegen` | The layer properties support code generation. |
| `predictSupportsCodegen` | `predict` is valid for code generation. |
| `doesNotHaveStateProperties` | For code generation, the layer does not have state properties. |
| `functionLayerSupportsCodegen` | For code generation, the layer function must be a named function on the path and the `Formattable` property must be `0` (false). |

Some tests run multiple times. These tests also check different data types and for GPU compatibility:

• `predictIsConsistentInType`

• `forwardIsConsistentInType`

• `backwardIsConsistentInType`

To execute the layer functions on a GPU, the functions must support inputs and outputs of type `gpuArray` with the underlying data type `single`.
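A common cause of failures in these type-consistency tests is allocating intermediate arrays without matching the input type. One way to keep the output class (and device) consistent with the input is to allocate with the `"like"` option of `zeros`, sketched here inside a hypothetical `predict` method (`layer.Alpha` is an assumed learnable parameter):

```matlab
function Z = predict(layer,X)
    % Allocate Z with the same class as X, so Z is a gpuArray with
    % underlying type single whenever X is. (Illustrative PReLU-style
    % computation; layer.Alpha is an assumed learnable parameter.)
    Z = zeros(size(X),"like",X);
    Z(:) = max(X,0) + layer.Alpha .* min(X,0);
end
```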

**Output Layers**

The `checkLayer` function uses these tests to check the validity of custom output layers (layers of type `nnet.layer.ClassificationLayer` or `nnet.layer.RegressionLayer`).

| Test | Description |
| --- | --- |
| `forwardLossDoesNotError` | `forwardLoss` does not error. |
| `backwardLossDoesNotError` | `backwardLoss` does not error. |
| `forwardLossIsScalar` | The output of `forwardLoss` is scalar. |
| `backwardLossIsConsistentInSize` | When `backwardLoss` is specified, the output of `backwardLoss` is consistent in size: `dLdY` is the same size as the predictions `Y`. |
| `forwardLossIsConsistentInType` | The output of `forwardLoss` is consistent in type: `loss` is the same type as the predictions `Y`. |
| `backwardLossIsConsistentInType` | When `backwardLoss` is specified, the output of `backwardLoss` is consistent in type: `dLdY` must be the same type as the predictions `Y`. |
| `gradientsAreNumericallyCorrect` | When `backwardLoss` is specified, the gradients computed in `backwardLoss` are numerically correct. |
| `backwardPropagationDoesNotError` | When `backwardLoss` is not specified, the derivatives can be computed using automatic differentiation. |

The `forwardLossIsConsistentInType` and `backwardLossIsConsistentInType` tests also check for GPU compatibility. To execute the layer functions on a GPU, the functions must support inputs and outputs of type `gpuArray` with the underlying data type `single`.

## Version History

Introduced in R2018a