estimateNetworkMetrics
Syntax

dataTable = estimateNetworkMetrics(net)
[dataTable1,dataTable2,…,dataTableN] = estimateNetworkMetrics(net1,net2,…,netN)
Description
dataTable = estimateNetworkMetrics(net) returns a table containing these estimated layer-wise metrics for a deep neural network:
LayerName — Name of layer
LayerType — Type of layer
NumberOfLearnables — Number of nonzero learnable parameters (weights and biases) in the network
NumberOfOperations — Total number of multiplications and additions
ParameterMemory (MB) — Memory required to store all of the learnable parameters
NumberOfMACs — Number of multiply-accumulate operations
ArithmeticIntensity (FLOP/B) — Amount of reuse of data fetched from memory, measured as the number of floating-point operations performed per byte of memory access required to support those operations. For example, convolutional layers reuse the same weight data across computations for multiple input features, resulting in a relatively high arithmetic intensity.
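As a rough illustration of the arithmetic-intensity calculation (hypothetical layer sizes; a simplified sketch that counts only weight traffic, not input and output activations):

```matlab
% Hypothetical 3-by-3 convolution: 64 input channels, 64 filters,
% 56-by-56 output map, single-precision (4-byte) weights.
numMACs     = 3*3*64*64*56*56;      % multiply-accumulate operations
numFLOPs    = 2*numMACs;            % one multiply and one add per MAC
weightBytes = 3*3*64*64*4;          % bytes of weight data fetched
intensity   = numFLOPs/weightBytes  % FLOP per byte of weights = 1568
```

Because each weight is reused across every output location, the FLOP count grows with the output map size while the weight bytes do not, which is why convolutional layers score high on this metric.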
This function estimates metrics for learnable layers, which have weights and biases, in the network. Estimated metrics are provided for the following supported layers.
[dataTable1,dataTable2,…,dataTableN] = estimateNetworkMetrics(net1,net2,…,netN) returns metrics for multiple networks.
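For example, metrics for two networks can be obtained in a single call (a sketch; net1 and net2 are assumed to be networks already in the workspace):

```matlab
% One output table per input network, in the same order.
[dataTable1,dataTable2] = estimateNetworkMetrics(net1,net2);
```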
This function requires the Deep Learning Toolbox Model Compression Library. To learn about the products required to quantize a deep neural network, see Quantization Workflow Prerequisites.
Examples
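A minimal example, assuming Deep Learning Toolbox and the Deep Learning Toolbox Model Compression Library are installed (squeezenet ships with Deep Learning Toolbox):

```matlab
% Load a small pretrained network.
net = squeezenet;

% Estimate layer-wise metrics and display the table.
dataTable = estimateNetworkMetrics(net)

% Assuming the table columns follow the metric names listed above,
% show the layers with the most multiply-accumulate operations first.
sortrows(dataTable,"NumberOfMACs","descend")
```

Sorting by NumberOfMACs is a common first step when deciding which layers to target for compression, since those layers dominate inference cost.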
Input Arguments
Version History
Introduced in R2022a