
ProjectedLayer

Compressed neural network layer using projection

Since R2023b

    Description

    A projected layer is a compressed neural network layer resulting from projection.

    Creation

    To compress a neural network using projection, use the compressNetworkUsingProjection function. This feature requires the Deep Learning Toolbox™ Model Quantization Library support package. This support package is a free add-on that you can download using the Add-On Explorer. Alternatively, see Deep Learning Toolbox Model Quantization Library.

    Properties


    Projected

OriginalClass

This property is read-only.

Class of the original layer, returned as a character vector.

    Example: 'nnet.cnn.layer.LSTMLayer'

    Data Types: char

LearnablesReduction

This property is read-only.

Proportion of learnables removed in the layer, returned as a scalar in the interval [0, 1].

    Data Types: double

Network

This property is read-only.

Neural network that represents the projection, returned as a dlnetwork object.

    The neural network that represents the projection depends on the type of layer:

• convolution2dLayer — Network containing three convolution2dLayer objects

• fullyConnectedLayer — Network containing two fullyConnectedLayer objects

• lstmLayer — Network containing a single lstmProjectedLayer object

• gruLayer — Network containing a single gruProjectedLayer object

    To replace the ProjectedLayer objects in a neural network with the equivalent network that represents the projection, use the unpackProjectedLayers function.
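For instance, a minimal sketch of inspecting and unpacking a compressed network, assuming netProjected is the output of compressNetworkUsingProjection and its second layer is a ProjectedLayer:

```matlab
% Access the dlnetwork stored in a projected layer (here, layer 2).
projNet = netProjected.Layers(2).Network;   % dlnetwork that represents the projection

% Replace all ProjectedLayer objects with the equivalent networks.
netUnpacked = unpackProjectedLayers(netProjected);
```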

InputSize

This property is read-only.

Number of input channels, returned as a positive integer.

    Data Types: double

OutputSize

This property is read-only.

Number of output channels, returned as a positive integer.

    Data Types: double

InputProjectorSize

This property is read-only.

Number of columns of the input projector, returned as a positive integer.

    The input projector is the matrix Q used to project the layer input. For more information, see Projected Layer.

    Data Types: double

OutputProjectorSize

This property is read-only.

Number of columns of the output projector, returned as a positive integer.

    The output projector is the matrix Q used to project the layer output. For more information, see Projected Layer.

    Data Types: double
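A rough sketch of the projection arithmetic (not the toolbox implementation), using the sizes from the example on this page: 12 input channels, an input projector with 8 columns, and 400 rows of LSTM input weights (4 gates times 100 hidden units):

```matlab
% Illustrative only: project the input with Q, then apply the compressed weights.
Qin = randn(12,8);    % input projector Q (InputSize x InputProjectorSize)
W   = randn(400,8);   % projected input weights (400 x 12 before projection)
x   = randn(12,1);    % one input vector

z = W*(Qin'*x);       % result is 400 x 1, matching the unprojected W*x
```

The compression comes from storing the 400-by-8 weights and the 12-by-8 projector instead of the full 400-by-12 weight matrix when the projector sizes are small relative to the layer sizes.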

    Layer

Name

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainnet and dlnetwork functions automatically assign new unique names to layers that have the name "".

When you compress a neural network using the compressNetworkUsingProjection function, it replaces each projectable layer with a ProjectedLayer object of the same name.

The ProjectedLayer object stores this property as a character vector.

    Data Types: char | string

NumInputs

This property is read-only.

Number of inputs to the layer, returned as a positive integer.

    Data Types: double

InputNames

Input names, returned as a cell array of character vectors.

    Data Types: cell

NumOutputs

This property is read-only.

Number of outputs from the layer, returned as a positive integer.

    Data Types: double

OutputNames

Output names, returned as a cell array of character vectors.

    Data Types: cell

    Examples


    Load the pretrained network in dlnetJapaneseVowels and the training data in JapaneseVowelsTrainData.

    load dlnetJapaneseVowels
    load JapaneseVowelsTrainData

    Create a mini-batch queue containing the training data. To create a mini-batch queue from in-memory data, convert the sequences to an array datastore.

    adsXTrain = arrayDatastore(XTrain,OutputType="same");

    Create the minibatchqueue object.

    • Specify a mini-batch size of 16.

    • Preprocess the mini-batches using the preprocessMiniBatchPredictors function, listed in the Mini-Batch Predictors Preprocessing Function section of the example.

    • Specify that the output data has format "CTB" (channel, time, batch).

    mbq = minibatchqueue(adsXTrain, ...
        MiniBatchSize=16, ...
        MiniBatchFcn=@preprocessMiniBatchPredictors, ...
        MiniBatchFormat="CTB");

    Compress the network.

    [netProjected,info] = compressNetworkUsingProjection(net,mbq);
    Compressed network has 83.4% fewer learnable parameters.
    Projection compressed 2 layers: "lstm","fc"
    

    View the network layers.

    netProjected.Layers
    ans = 
      4x1 Layer array with layers:
    
         1   'sequenceinput'   Sequence Input    Sequence input with 12 dimensions
         2   'lstm'            Projected Layer   Projected LSTM with 100 hidden units
         3   'fc'              Projected Layer   Projected fully connected layer with output size 9
         4   'softmax'         Softmax           softmax
    

    View the projected LSTM layer. The LearnablesReduction property shows the proportion of learnables removed in the layer. The Network property contains the neural network that represents the projection.

    netProjected.Layers(2)
    ans = 
      ProjectedLayer with properties:
    
                       Name: 'lstm'
              OriginalClass: 'nnet.cnn.layer.LSTMLayer'
        LearnablesReduction: 0.8408
                  InputSize: 12
                 OutputSize: 100
    
       Hyperparameters
         InputProjectorSize: 8
        OutputProjectorSize: 7
    
       Learnable Parameters
                    Network: [1x1 dlnetwork]
    
       Network Learnable Parameters
         Network/lstm/InputWeights      400x8 dlarray
         Network/lstm/RecurrentWeights  400x7 dlarray
         Network/lstm/Bias              400x1 dlarray
         Network/lstm/InputProjector    12x8  dlarray
         Network/lstm/OutputProjector   100x7 dlarray
    
       Network State Parameters
         Network/lstm/HiddenState  100x1 dlarray
         Network/lstm/CellState    100x1 dlarray
    
    Use properties method to see a list of all properties.
    
    

    Mini-Batch Predictors Preprocessing Function

    The preprocessMiniBatchPredictors function preprocesses a mini-batch of predictors by extracting the sequence data from the input cell array and truncating them along the second dimension so that they have the same length.

Note: Do not pad sequence data when doing the PCA step for projection, because padding can negatively impact the analysis. Instead, truncate mini-batches of data so that they have the same length, or use mini-batches of size 1.

    function X = preprocessMiniBatchPredictors(dataX)
    
    X = padsequences(dataX,2,Length="shortest");
    
    end

    Load the pretrained network in dlnetProjectedJapaneseVowels.

    load dlnetProjectedJapaneseVowels

    View the network properties.

    net
    net = 
      dlnetwork with properties:
    
             Layers: [4x1 nnet.cnn.layer.Layer]
        Connections: [3x2 table]
         Learnables: [9x3 table]
              State: [2x3 table]
         InputNames: {'sequenceinput'}
        OutputNames: {'softmax'}
        Initialized: 1
    
      View summary with summary.
    
    

    View the network layers. The network has two projected layers.

    net.Layers
    ans = 
      4x1 Layer array with layers:
    
         1   'sequenceinput'   Sequence Input    Sequence input with 12 dimensions
         2   'lstm'            Projected Layer   Projected LSTM with 100 hidden units
         3   'fc'              Projected Layer   Projected fully connected layer with output size 9
         4   'softmax'         Softmax           softmax
    

    Unpack the projected layers.

    netUnpacked = unpackProjectedLayers(net)
    netUnpacked = 
      dlnetwork with properties:
    
             Layers: [5x1 nnet.cnn.layer.Layer]
        Connections: [4x2 table]
         Learnables: [9x3 table]
              State: [2x3 table]
         InputNames: {'sequenceinput'}
        OutputNames: {'softmax'}
        Initialized: 1
    
      View summary with summary.
    
    

    View the unpacked network layers. The unpacked network has a projected LSTM layer and two fully connected layers in place of the projected layers.

    netUnpacked.Layers
    ans = 
      5x1 Layer array with layers:
    
         1   'sequenceinput'   Sequence Input    Sequence input with 12 dimensions
         2   'lstm'            Projected LSTM    Projected LSTM layer with 100 hidden units, an output projector size of 7, and an input projector size of 8
         3   'fc_proj_in'      Fully Connected   4 fully connected layer
         4   'fc_proj_out'     Fully Connected   9 fully connected layer
         5   'softmax'         Softmax           softmax
    

    Tips

    • Code generation does not support ProjectedLayer objects. To replace ProjectedLayer objects in a neural network with the equivalent neural network that represents the projection, use the unpackProjectedLayers function or set the UnpackProjectedLayers option of the compressNetworkUsingProjection function to 1 (true).
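A minimal sketch of the second approach, assuming net and mbq are set up as in the example above:

```matlab
% Unpack projected layers during compression so that the result contains
% no ProjectedLayer objects (for example, for code generation workflows).
netCompressed = compressNetworkUsingProjection(net,mbq, ...
    UnpackProjectedLayers=true);
```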



    Version History

    Introduced in R2023b