
mapeMetric

Deep learning mean absolute percentage error metric

Since R2024b

    Description

    Use a MAPEMetric object to track the mean absolute percentage error (MAPE) when you train or test a deep neural network.

    To specify which metrics to use during training, specify the Metrics option of the trainingOptions function. You can use this option only when you train a network using the trainnet function.

    To plot the metrics during training, in the training options, specify Plots as "training-progress". If you specify the ValidationData training option, then the software also plots and records the metric values for the validation data. To output the metric values to the Command Window during training, in the training options, set Verbose to true.

    You can also access the metric values after training using the TrainingHistory and ValidationHistory fields of the second output of the trainnet function.
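
    For example, a minimal sketch of this workflow, where layers, XTrain, and TTrain are placeholder training inputs that are not defined on this page:

    % Track MAPE during training and plot it in the training progress window.
    metric = mapeMetric;
    options = trainingOptions("adam", ...
        Metrics=metric, ...
        Plots="training-progress", ...
        Verbose=true);

    % Train the network, then read the recorded metric values.
    [net,info] = trainnet(XTrain,TTrain,layers,"mse",options);
    info.TrainingHistory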

    To specify which metrics to use when you test a neural network, use the metrics argument of the testnet function.
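
    For example, a short sketch, assuming a trained network net and placeholder in-memory test data XTest and TTest:

    % Compute the MAPE of the trained network on held-out test data.
    metric = mapeMetric;
    testMAPE = testnet(net,XTest,TTest,metric)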

    Creation

    Description

    metric = mapeMetric creates a MAPEMetric object. You can then specify metric as the Metrics name-value argument in the trainingOptions function or the metrics argument of the testnet function. With no additional options specified, this syntax is equivalent to specifying the metric as "mape".


    metric = mapeMetric(Name=Value) sets the Name, NetworkOutput, and NormalizationFactor properties using name-value arguments.
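
    For example, a sketch showing the default metric and a customized one (the display name "PercentError" is an arbitrary illustration):

    % With no additional options, these two specifications are equivalent.
    options = trainingOptions("adam",Metrics=mapeMetric);
    options = trainingOptions("adam",Metrics="mape");

    % Customize the display name and the normalization factor.
    metric = mapeMetric(Name="PercentError",NormalizationFactor="all-elements");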

    Properties


    Metric name, specified as a string scalar or character vector. The metric name appears in the training plot, the verbose output, the training information that you can access as the second output of the trainnet function, and the table output of the testnet function.

    Data Types: char | string

    This property is read-only.

    Name of the layer to apply the metric to, specified as [], a string scalar, or a character vector. When the value is [], the software passes all of the network outputs to the metric.

    Note

    You can apply the built-in metric to only a single output. If you have a network with multiple outputs, then you must specify the NetworkOutput name-value argument. To apply built-in metrics to multiple outputs, you must create a metric object for each output.

    Data Types: char | string
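
    For example, for a network with two outputs whose layers are named "out1" and "out2" (hypothetical names used for illustration), you can create one metric object per output and pass both to the training options, assuming the Metrics option accepts a cell array of metric objects:

    % Create one metric object per network output.
    metricOut1 = mapeMetric(Name="MAPE_out1",NetworkOutput="out1");
    metricOut2 = mapeMetric(Name="MAPE_out2",NetworkOutput="out2");
    options = trainingOptions("adam",Metrics={metricOut1,metricOut2});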

    This property is read-only.

    Divisor for normalizing the metric sum, specified as one of these values (see the sketch below):

    • "batch-size" — Divide the metric sum by the number of observations.

    • "all-elements" — Divide the metric sum by the number of elements in the targets.

    Data Types: char | string
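
    Conceptually, the normalization factor is the divisor applied to the summed absolute percentage errors. A rough sketch for placeholder targets T, predictions Y, and batch size numObservations; the software's internal computation may differ in detail:

    % Sum of absolute percentage errors over all elements of the targets.
    pctErrSum = 100*sum(abs((Y - T)./T),"all");

    mapeBatchSize   = pctErrSum/numObservations;   % NormalizationFactor="batch-size"
    mapeAllElements = pctErrSum/numel(T);          % NormalizationFactor="all-elements"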

    This property is read-only.

    Flag to maximize the metric, returned as 0 (false). A value of 0 (false) indicates that the optimal value for the metric occurs when the metric is minimized.

    For this metric, the Maximize value is always 0 (false).

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical

    Object Functions

    trainingOptions    Options for training deep learning neural network
    trainnet           Train deep learning neural network

    Examples


    Plot and record the training and validation MAPE when you train a deep neural network.

    Load the training and test data from the DigitsDataTrain and DigitsDataTest MAT files, respectively. The data set contains synthetic images of handwritten digits and the corresponding angles (in degrees) by which each image is rotated. The anglesTrain and anglesTest variables contain the rotation angles in degrees.

    load DigitsDataTrain.mat
    load DigitsDataTest.mat

    You can train a deep learning network to predict the rotation angle of the digit.

    Create an image regression network.

    layers = [
        imageInputLayer([28 28 1])
        convolution2dLayer(3,8,Padding="same")
        batchNormalizationLayer
        reluLayer
        averagePooling2dLayer(2,Stride=2)
        convolution2dLayer(3,16,Padding="same")
        batchNormalizationLayer
        reluLayer
        averagePooling2dLayer(2,Stride=2)
        convolution2dLayer(3,32,Padding="same")
        batchNormalizationLayer
        reluLayer
        convolution2dLayer(3,32,Padding="same")
        batchNormalizationLayer
        reluLayer
        dropoutLayer(0.2)
        fullyConnectedLayer(1)];

    Create a MAPEMetric object and set the NormalizationFactor property to "batch-size". You can use this object to record and plot the training and validation MAPE.

    metric = mapeMetric(NormalizationFactor="batch-size")
    metric = 
      MAPEMetric with properties:
    
                       Name: "MAPE"
        NormalizationFactor: "batch-size"
              NetworkOutput: []
                   Maximize: 0
    

    The rotation angle for each image is between -45 and 45 degrees. Because MAPE divides each error by the corresponding target value, targets close to zero can produce very large percentage errors. To avoid this, shift the angle values by 360 degrees.

    anglesTrain = anglesTrain + 360; 
    anglesTest = anglesTest + 360; 

    Specify the MAPE metric in the training options. To plot the MAPE during training, set Plots to "training-progress". To output the values during training, set Verbose to true.

    options = trainingOptions("adam", ...
        MaxEpochs=10, ...
        Metrics=metric, ...
        ValidationData={XTest,anglesTest}, ...
        ValidationFrequency=50, ...
        Plots="training-progress", ...
        Verbose=true);

    Train the network using the trainnet function.

    [net,info] = trainnet(XTrain,anglesTrain,layers,"mse",options);
        Iteration    Epoch    TimeElapsed    LearnRate    TrainingLoss    ValidationLoss    TrainingMAPE    ValidationMAPE
        _________    _____    ___________    _________    ____________    ______________    ____________    ______________
                0        0       00:00:24        0.001                        1.3056e+05                           0.99992
                1        1       00:00:25        0.001      1.3106e+05                            0.9989                  
               50        2       00:00:50        0.001           94497             98734         0.84475           0.86825
              100        3       00:00:52        0.001           58349             62901         0.66741           0.69134
              150        4       00:00:54        0.001           29242             30337         0.47612           0.47729
              200        6       00:00:57        0.001           13591             12694         0.31457           0.30623
              250        7       00:00:59        0.001          3847.2            3281.8         0.16551           0.15135
              300        8       00:01:01        0.001          1016.1            984.87        0.075017          0.078486
              350        9       00:01:03        0.001          293.29            162.27         0.03775          0.027857
              390       10       00:01:04        0.001          234.13            398.57        0.034001          0.045649
    Training stopped: Max epochs completed
    

    Access the loss and MAPE values for the validation data.

    info.ValidationHistory
    ans=9×3 table
        Iteration       Loss         MAPE  
        _________    __________    ________
             0       1.3056e+05     0.99992
            50            98734     0.86825
           100            62901     0.69134
           150            30337     0.47729
           200            12694     0.30623
           250           3281.8     0.15135
           300           984.87    0.078486
           350           162.27    0.027857
           390           398.57    0.045649
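
    To also evaluate the trained network on the test images, you can pass the same metric object to testnet. A short sketch, using the shifted anglesTest from this example:

    testMAPE = testnet(net,XTest,anglesTest,metric)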
    
    


    Version History

    Introduced in R2024b