
image3dInputLayer

3-D image input layer

Description

A 3-D image input layer inputs 3-D images or volumes to a network and applies data normalization.

For 2-D image input, use imageInputLayer.

Creation

Syntax

layer = image3dInputLayer(inputSize)
layer = image3dInputLayer(inputSize,Name,Value)

Description

layer = image3dInputLayer(inputSize) returns a 3-D image input layer and specifies the InputSize property.


layer = image3dInputLayer(inputSize,Name,Value) sets the optional properties using name-value pairs. You can specify multiple name-value pairs. Enclose each property name in single quotes.
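For example, the following call sets two optional properties in a single name-value call (the input size and layer name here are illustrative):

% Create a 3-D image input layer for 64-by-64-by-64 grayscale volumes,
% naming the layer and disabling normalization via name-value pairs.
layer = image3dInputLayer([64 64 64 1],'Name','vol_input','Normalization','none');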

Properties


3-D Image Input

InputSize

Size of the input data, specified as a row vector of integers [h w d c], where h, w, d, and c correspond to the height, width, depth, and number of channels, respectively.

  • For grayscale input, specify a vector with c equal to 1.

  • For RGB input, specify a vector with c equal to 3.

  • For multispectral or hyperspectral input, specify a vector with c equal to the number of channels.

For 2-D image input, use imageInputLayer.

Example: [132 132 116 3]
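To illustrate the channel-count cases above, the following calls use arbitrary example sizes:

% Grayscale volume: one channel.
layerGray = image3dInputLayer([64 64 64 1]);
% RGB volume: three channels.
layerRGB = image3dInputLayer([64 64 64 3]);
% Multispectral volume with, for example, eight channels.
layerMulti = image3dInputLayer([64 64 64 8]);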

Normalization

Data transformation to apply every time data is forward propagated through the input layer, specified as one of the following.

  • 'zerocenter' — Subtract the average image specified by the AverageImage property. The trainNetwork function automatically computes the average image at training time.

  • 'none' — Do not transform the input data.
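As a sketch of the two options (the layer size is chosen arbitrarily):

% Default: trainNetwork computes an average image and subtracts it from every input.
layerZeroCenter = image3dInputLayer([32 32 32 1],'Normalization','zerocenter');
% Pass the input data through unchanged.
layerNoNorm = image3dInputLayer([32 32 32 1],'Normalization','none');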

AverageImage

Average image used for zero-center normalization, specified as an h-by-w-by-d-by-c array, a 1-by-1-by-1-by-c array of means per channel, or [], where h, w, d, and c correspond to the height, width, depth, and number of channels of the average image, respectively.

You can set this property when creating networks without training (for example, when assembling networks using assembleNetwork). Otherwise, the trainNetwork function recomputes the average image at training time. When specifying the average image, the Normalization property must be 'zerocenter'.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
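A minimal sketch of supplying per-channel means when assembling a network without training, assuming the AverageImage property can be assigned directly on the layer object as described above (the mean values and size are illustrative):

% Per-channel means stored as a 1-by-1-by-1-by-c array (c = 3 here).
channelMeans = reshape([0.45 0.42 0.40],1,1,1,3);
layer = image3dInputLayer([64 64 64 3],'Normalization','zerocenter');
layer.AverageImage = channelMeans;   % assumed assignable before calling assembleNetwork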

Layer

Name

Layer name, specified as a character vector or a string scalar. To include a layer in a layer graph, you must specify a nonempty unique layer name. If you train a series network with the layer and Name is set to '', then the software automatically assigns a name to the layer at training time.

Data Types: char | string

NumInputs

Number of inputs of the layer. The layer has no inputs.

Data Types: double

InputNames

Input names of the layer. The layer has no inputs.

Data Types: cell

NumOutputs

Number of outputs of the layer. This layer has a single output only.

Data Types: double

OutputNames

Output names of the layer. This layer has a single output only.

Data Types: cell

Examples


Create a 3-D image input layer for 132-by-132-by-116 color 3-D images with name 'input'. By default, the layer performs data normalization by subtracting the mean image of the training set from every input image.

layer = image3dInputLayer([132 132 116 3],'Name','input')
layer = 
  Image3DInputLayer with properties:

             Name: 'input'
        InputSize: [132 132 116 3]

   Hyperparameters
    Normalization: 'zerocenter'
     AverageImage: []

Include a 3-D image input layer in a Layer array.

layers = [
    image3dInputLayer([28 28 28 3])
    convolution3dLayer(5,16,'Stride',4)
    reluLayer
    maxPooling3dLayer(2,'Stride',4)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer]
layers = 
  7x1 Layer array with layers:

     1   ''   3-D Image Input         28x28x28x3 images with 'zerocenter' normalization
     2   ''   Convolution             16 5x5x5 convolutions with stride [4  4  4] and padding [0  0  0; 0  0  0]
     3   ''   ReLU                    ReLU
     4   ''   3-D Max Pooling         2x2x2 max pooling with stride [4  4  4] and padding [0  0  0; 0  0  0]
     5   ''   Fully Connected         10 fully connected layer
     6   ''   Softmax                 softmax
     7   ''   Classification Output   crossentropyex

Introduced in R2019a