loadTFLiteModel

Load TensorFlow Lite model

Since R2022a

Description

Add-On Required: This feature requires the Deep Learning Toolbox Interface for LiteRT Library add-on.

net = loadTFLiteModel(modelFileName) loads a pretrained TensorFlow™ Lite model file modelFileName and returns a TFLiteModel object.

Use this TFLiteModel object with the predict function in your MATLAB® code to perform inference in MATLAB execution, code generation, or MATLAB Function block in Simulink® models. For more information, see Prerequisites for Deep Learning with TensorFlow Lite Models.

___ = loadTFLiteModel(___, Name=Value) loads a pretrained TensorFlow Lite model file with specified options using one or more name-value arguments in addition to the input arguments in previous syntaxes. For example, when you need to maintain and produce the same data layout as the pretrained TFLite model, set PreserveDataFormats to true.

Examples

Suppose that your current working directory contains a TensorFlow Lite model file named mobilenet_v1_0.5_224.tflite.

Load the model by using the loadTFLiteModel function. Inspect the object this function creates.

net = loadTFLiteModel('mobilenet_v1_0.5_224.tflite');
disp(net)
  TFLiteModel with properties:
            ModelName: 'mobilenet_v1_0.5_224.tflite'
            NumInputs: 1
           NumOutputs: 1
            InputSize: {[224 224 3]}
           OutputSize: {[1001 1]}
           NumThreads: 8
                 Mean: 127.5000
    StandardDeviation: 127.5000

Create a MATLAB function that performs inference using the object net. This function loads the Mobilenet-V1 model into a persistent network object, then performs prediction by passing the network object to the predict function. Subsequent calls to this function reuse the persistent object.

function out = tflite_predict(in)
persistent net;
if isempty(net)
    net = loadTFLiteModel('mobilenet_v1_0.5_224.tflite');
end
out = predict(net,in);
end

For an example that shows how to generate code for this function and deploy on Raspberry Pi® hardware, see Generate Code for TensorFlow Lite (TFLite) Model and Deploy on Raspberry Pi.
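For instance, a minimal sketch of calling this function in MATLAB (assuming an RGB test image; peppers.png ships with MATLAB, and the image is resized here to match the 224-by-224-by-3 input size shown above):

```matlab
% Read a test image and resize it to the model input size (224-by-224-by-3).
img = imread('peppers.png');
img = imresize(img, [224 224]);

% Run inference through the persistent TFLiteModel object.
scores = tflite_predict(single(img));

% Report the index of the highest-scoring class.
[~, idx] = max(scores);
fprintf('Predicted class index: %d\n', idx);
```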

Note

By default, the Mean and StandardDeviation properties of a TFLiteModel object are both set to 127.5. To change these default values after you create the object, make assignments by using the dot notation. For example:

net.Mean = 0;
net.StandardDeviation = 1;

If the input data is not normalized, you must set Mean to 0 and StandardDeviation to 1. Otherwise, set these properties based on how the input data is normalized.
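For instance, if you prefer to normalize the input yourself rather than rely on the default normalization, a sketch (assuming the Mobilenet-V1 model from the earlier example, which expects inputs in the range [-1, 1]):

```matlab
net = loadTFLiteModel('mobilenet_v1_0.5_224.tflite');

% The input below is normalized manually, so disable the
% built-in normalization.
net.Mean = 0;
net.StandardDeviation = 1;

% Map pixel values from [0, 255] to [-1, 1] before prediction.
img = single(imresize(imread('peppers.png'), [224 224]));
img = (img - 127.5) / 127.5;
out = predict(net, img);
```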

Input Arguments

modelFileName — Name of TensorFlow Lite model file
character vector | string scalar

Name of the TensorFlow Lite model file, specified as a character vector or a string scalar.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: PreserveDataFormats=true preserves the layouts of the input and output data to match the layout of the pretrained TFLite model.

PreserveDataFormats — Preserve data formats of pretrained model
false (default) | true

You must specify this option as a compile-time constant for code generation workflows. Set PreserveDataFormats explicitly to true only when you need to maintain and produce the same data layout as the pretrained TensorFlow Lite model.
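As a sketch (assuming the Mobilenet-V1 model file from the earlier example):

```matlab
% Keep the input and output layouts of the pretrained TFLite model
% instead of converting to the default MATLAB layout.
net = loadTFLiteModel('mobilenet_v1_0.5_224.tflite', PreserveDataFormats=true);
```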

Since R2026a

NumThreads — Number of computational threads
positive integer

Number of computational threads used for running inference with the TensorFlow Lite model, specified as a positive integer-valued numeric scalar.

The default value can change depending on the target that you specify:

  • When you run the TFLite model in MATLAB, the default value is equal to the value returned by the maxNumCompThreads function.

  • If the code generation target is the MATLAB host computer, the default value is equal to the value returned by the maxNumCompThreads function.

  • If the code generation target is a non-host hardware board, the default value is 4.
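For example, a sketch that caps inference at two threads (the thread count here is illustrative; any positive integer is valid, and the NumThreads name-value argument requires R2026a or later):

```matlab
% Limit TFLite inference to 2 computational threads, for example
% on a resource-constrained deployment target.
net = loadTFLiteModel('mobilenet_v1_0.5_224.tflite', NumThreads=2);
disp(net.NumThreads)
```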

Output Arguments

net — TensorFlow Lite model
TFLiteModel object

TFLiteModel object that represents the TensorFlow Lite model file.

Extended Capabilities

Version History

Introduced in R2022a
