Clipped Rectified Linear Unit (ReLU) layer


A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to that clipping ceiling.

This operation is equivalent to:

f(x) = 0,        x < 0
f(x) = x,        0 ≤ x < ceiling
f(x) = ceiling,  x ≥ ceiling
This clipping prevents the output from becoming too large.
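The piecewise operation above can be sketched in plain Python. This is an illustration of the math only, not how the MATLAB layer is implemented; the layer applies the same function elementwise to its input array.

```python
def clipped_relu(x, ceiling):
    """Clipped ReLU: 0 below zero, identity up to the ceiling, then clipped."""
    if x < 0:
        return 0
    if x >= ceiling:
        return ceiling
    return x

# Apply elementwise with a clipping ceiling of 10.
inputs = [-3, 0, 5, 10, 12]
outputs = [clipped_relu(x, 10) for x in inputs]
print(outputs)  # [0, 0, 5, 10, 10]
```

Negative inputs map to 0, values between 0 and the ceiling pass through unchanged, and values at or above the ceiling are clipped to it.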



layer = clippedReluLayer(ceiling)
layer = clippedReluLayer(ceiling,'Name',Name)


layer = clippedReluLayer(ceiling) returns a clipped ReLU layer with the clipping ceiling equal to ceiling.


layer = clippedReluLayer(ceiling,'Name',Name) sets the optional Name property.



Clipped ReLU

Ceiling
Ceiling for input clipping, specified as a positive scalar.

Example: 10

Name
Layer name, specified as a character vector or a string scalar. To include a layer in a layer graph, you must specify a nonempty unique layer name. If you train a series network with the layer and Name is set to '', then the software automatically assigns a name to the layer at training time.

Data Types: char | string

NumInputs
Number of inputs of the layer. This layer accepts a single input only.

Data Types: double

InputNames
Input names of the layer. This layer accepts a single input only.

Data Types: cell

NumOutputs
Number of outputs of the layer. This layer has a single output only.

Data Types: double

OutputNames
Output names of the layer. This layer has a single output only.

Data Types: cell



Create a clipped ReLU layer with the name 'clip1' and the clipping ceiling equal to 10.

layer = clippedReluLayer(10,'Name','clip1')
layer = 
  ClippedReLULayer with properties:

       Name: 'clip1'

    Ceiling: 10

Include a clipped ReLU layer in a Layer array.

layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(5,20)
    clippedReluLayer(10)
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer]
layers = 
  7x1 Layer array with layers:

     1   ''   Image Input             28x28x1 images with 'zerocenter' normalization
     2   ''   Convolution             20 5x5 convolutions with stride [1  1] and padding [0  0  0  0]
     3   ''   Clipped ReLU            Clipped ReLU with ceiling 10
     4   ''   Max Pooling             2x2 max pooling with stride [2  2] and padding [0  0  0  0]
     5   ''   Fully Connected         10 fully connected layer
     6   ''   Softmax                 softmax
     7   ''   Classification Output   crossentropyex


[1] Hannun, Awni, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, et al. "Deep speech: Scaling up end-to-end speech recognition." Preprint, submitted 17 Dec 2014.

Introduced in R2017b