Main Content

nlmpc

Nonlinear model predictive controller

Description

A nonlinear model predictive controller computes optimal control moves across the prediction horizon using a nonlinear prediction model, a nonlinear cost function, and nonlinear constraints. For more information on nonlinear MPC, see Nonlinear MPC.

Creation

Description


nlobj = nlmpc(nx,ny,nu) creates an nlmpc object whose prediction model has nx states, ny outputs, and nu inputs, where all inputs are manipulated variables. Use this syntax if your model has no measured or unmeasured disturbance inputs.

nlobj = nlmpc(nx,ny,'MV',mvIndex,'MD',mdIndex) creates an nlmpc object whose prediction model has measured disturbance inputs. Specify the input indices for the manipulated variables, mvIndex, and measured disturbances, mdIndex.

nlobj = nlmpc(nx,ny,'MV',mvIndex,'UD',udIndex) creates an nlmpc object whose prediction model has unmeasured disturbance inputs. Specify the input indices for the manipulated variables and unmeasured disturbances, udIndex.


nlobj = nlmpc(nx,ny,'MV',mvIndex,'MD',mdIndex,'UD',udIndex) creates an nlmpc object whose prediction model has both measured and unmeasured disturbance inputs. Specify the input indices for the manipulated variables, measured disturbances, and unmeasured disturbances.
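For example, the following sketch (with hypothetical dimensions) creates a controller for a five-state, three-output plant whose four inputs are two manipulated variables, one measured disturbance, and one unmeasured disturbance.

nx = 5;  % hypothetical number of states
ny = 3;  % hypothetical number of outputs
nlobj = nlmpc(nx,ny,'MV',[1 2],'MD',3,'UD',4);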

Input Arguments


Number of prediction model states, specified as a positive integer. This value is stored in the Dimensions.NumberOfStates controller read-only property. You cannot change the number of states after creating the controller object.

Example: 6

Number of prediction model outputs, specified as a positive integer. This value is stored in the Dimensions.NumberOfOutputs controller read-only property. You cannot change the number of outputs after creating the controller object.

Example: 2

Number of prediction model inputs, which are all set to be manipulated variables, specified as a positive integer. This value is stored in the Dimensions.NumberOfInputs controller read-only property. You cannot change the number of manipulated variables after creating the controller object.

Example: 4

Manipulated variable indices, specified as a vector of positive integers. This value is stored in the Dimensions.MVIndex controller read-only property. You cannot change these indices after creating the controller object.

The combined set of indices from mvIndex, mdIndex, and udIndex must contain all integers from 1 through Nu, where Nu is the number of prediction model inputs.

Example: [1 3]

Measured disturbance indices, specified as a vector of positive integers. This value is stored in the Dimensions.MDIndex controller read-only property. You cannot change these indices after creating the controller object.

The combined set of indices from mvIndex, mdIndex, and udIndex must contain all integers from 1 through Nu, where Nu is the number of prediction model inputs.

Example: 2

Unmeasured disturbance indices, specified as a vector of positive integers. This value is stored in the Dimensions.UDIndex controller read-only property. You cannot change these indices after creating the controller object.

The combined set of indices from mvIndex, mdIndex, and udIndex must contain all integers from 1 through Nu, where Nu is the number of prediction model inputs.

Example: 4

Properties


Prediction model sample time, specified as a positive finite scalar. The controller uses a discrete-time model with a sample time of Ts for prediction. If you specify a continuous-time prediction model (Model.IsContinuousTime is true), then the controller discretizes the model using the built-in implicit trapezoidal rule with a sample time of Ts.

Example: 0.1

Prediction horizon steps, specified as a positive integer. The product of PredictionHorizon and Ts is the prediction time, that is, how far the controller looks into the future.

Example: 15

Control horizon, specified as one of the following:

  • Positive integer, m, between 1 and p, inclusive, where p is equal to PredictionHorizon. In this case, the controller computes m free control moves occurring at times k through k+m–1, and holds the controller output constant for the remaining prediction horizon steps from k+m through k+p–1. Here, k is the current control interval.

  • Vector of positive integers [m1, m2, …], specifying the lengths of blocking intervals. By default, the controller computes M blocks of free moves, where M is the number of blocking intervals. The first free move applies from time k through k+m1–1, the second free move applies from time k+m1 through k+m1+m2–1, and so on. Using blocking moves can improve the robustness of your controller. Ideally, the sum of the values in ControlHorizon equals the prediction horizon p. If you specify a vector whose sum is:

    • Less than the prediction horizon, then the controller adds a blocking interval. The length of this interval is such that the sum of the interval lengths is p. For example, if p=10 and you specify a control horizon of ControlHorizon=[1 2 3], then the controller uses four intervals with lengths [1 2 3 4].

    • Greater than the prediction horizon, then the intervals are truncated until the sum of the interval lengths is equal to p. For example, if p=10 and you specify a control horizon of ControlHorizon= [1 2 3 6 7], then the controller uses four intervals with lengths [1 2 3 4].

Piecewise constant blocking moves are often too restrictive for optimal path planning applications. To produce a less-restrictive, better-conditioned nonlinear programming problem, you can specify piecewise linear manipulated variable blocking intervals. To do so, set the Optimization.MVInterpolationOrder property of your nlmpc controller object to 1.

For more information on how manipulated variable blocking works with different interpolation methods, see Manipulated Variable Blocking.

Example: 3
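As a sketch (assuming an existing nlmpc object nlobj), the following configures three blocking intervals that sum to the prediction horizon and switches to piecewise linear interpolation.

nlobj.PredictionHorizon = 10;
nlobj.ControlHorizon = [2 3 5];              % three blocking intervals, sum = 10
nlobj.Optimization.MVInterpolationOrder = 1; % piecewise linear blocking moves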

This property is read-only.

Prediction model dimensional information, specified when you create the controller and stored as a structure with the following fields.

Number of states in the prediction model, specified as a positive integer. This value corresponds to nx.

Example: 6

Number of outputs in the prediction model, specified as a positive integer. This value corresponds to ny.

Example: 1

Number of inputs in the prediction model, specified as a positive integer. This value corresponds to either nu or the sum of the lengths of mvIndex, mdIndex, and udIndex.

Example: 3

Manipulated variable indices for the prediction model, specified as a vector of positive integers. This value corresponds to mvIndex.

Example: [1 2]

Measured disturbance indices for the prediction model, specified as a vector of positive integers. This value corresponds to mdIndex.

Example: 4

Unmeasured disturbance indices for the prediction model, specified as a vector of positive integers. This value corresponds to udIndex.

Example: 3

Prediction model, specified as a structure with the following fields.

State function, specified as a string, character vector, or function handle. For a continuous-time prediction model, StateFcn is the state derivative function. For a discrete-time prediction model, StateFcn is the state update function.

If your state function is continuous-time, the controller automatically discretizes the model using the implicit trapezoidal rule. This method can handle moderately stiff models, and its prediction accuracy depends on the controller sample time Ts; that is, a large sample time leads to inaccurate prediction.

If the default discretization method does not provide satisfactory prediction for your application, you can specify your own discrete-time prediction model that uses a different method, such as the multistep forward Euler rule.

You can specify your state function in one of the following ways:

  • Name of a function in the current working folder or on the MATLAB® path, specified as a string or character vector

    Model.StateFcn = "myStateFunction";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Model.StateFcn = @myStateFunction;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Model.StateFcn = @(x,u,params) myStateFunction(x,u,params)

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

For more information, see Specify Prediction Model for Nonlinear MPC.

Example: "@transFcn"

Output function, specified as a string, character vector, or function handle. If the number of states and outputs of the prediction model are the same, you can omit OutputFcn, which implies that all states are measurable; that is, each output corresponds to one state.

Note

Your output function cannot have direct feedthrough from any manipulated variable to any output at any time.

You can specify your output function in one of the following ways:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Model.OutputFcn = "myOutputFunction";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Model.OutputFcn = @myOutputFunction;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Model.OutputFcn = @(x,u,params) myOutputFunction(x,u,params)

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

For more information, see Specify Prediction Model for Nonlinear MPC.

Example: "@outFcn"

Option to indicate prediction model time domain, specified as one of the following:

  • true — Continuous-time prediction model. In this case, the controller automatically discretizes the model during prediction using Ts.

  • false — Discrete-time prediction model. In this case, Ts is the sample time of the model.

Note

IsContinuousTime must be consistent with the functions specified in Model.StateFcn and Model.OutputFcn.

If IsContinuousTime is true, StateFcn must return the derivative of the state with respect to time, at the current time. Otherwise StateFcn must return the state at the next control interval.

Example: true

Number of optional model parameters used by the prediction model, custom cost function, custom constraints, and passivity functions, specified as a nonnegative integer. The number of parameters includes all the parameters used by these functions. For example, if the state function uses only parameter p1, the constraint functions use only parameter p2, and the cost function uses only parameter p3, then NumberOfParameters is 3.

Example: 1
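For example, if your model functions take one optional parameter (such as the sample time), a sketch of the corresponding configuration is:

nlobj.Model.NumberOfParameters = 1;  % one optional parameter, for example the sample time Ts
opt = nlmpcmoveopt;
opt.Parameters = {0.1};              % pass the parameter value at simulation time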

State information, bounds, and scale factors, specified as a structure array with Nx elements, where Nx is the number of states. Each structure element has the following fields.

State lower bound, specified as a scalar or vector. By default, this lower bound is -Inf.

To use the same bound across the prediction horizon, specify a scalar value.

To vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

State bounds are always hard constraints.

Example: [-20 -18 -15]

State upper bound, specified as a scalar or vector. By default, this upper bound is +Inf.

To use the same bound across the prediction horizon, specify a scalar value.

To vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

State bounds are always hard constraints.

Example: [20 15]

State name, specified as a string or character vector. The default state name is "x#", where # is its state index.

Example: "speed"

State units, specified as a string or character vector.

Example: "m/s"

State scale factor, specified as a positive finite scalar. In general, use the operating range of the state. Specifying the proper scale factor can improve numerical conditioning for optimization.

Example: 10
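As a sketch with hypothetical values, the following bounds and scales the first prediction model state.

nlobj.States(1).Min = -10;          % constant lower bound over the horizon
nlobj.States(1).Max = [20 18 15];   % bounds for steps k+1 to k+3; 15 is held afterward
nlobj.States(1).Name = "speed";
nlobj.States(1).Units = "m/s";
nlobj.States(1).ScaleFactor = 10;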

Output variable (OV) information, bounds, and scale factors, specified as a structure array with Ny elements, where Ny is the number of output variables. To access this property, you can use the alias OV instead of OutputVariables.

Each structure element has the following fields.

OV lower bound, specified as a scalar or vector. By default, this lower bound is -Inf.

To use the same bound across the prediction horizon, specify a scalar value.

To vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

Example: [-10 -8]

OV upper bound, specified as a scalar or vector. By default, this upper bound is +Inf.

To use the same bound across the prediction horizon, specify a scalar value.

To vary the bound over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

Example: [12 10 8]

OV lower bound softness, where a larger ECR value indicates a softer constraint, specified as a nonnegative finite scalar or vector. By default, OV lower bounds are soft constraints.

To use the same ECR value across the prediction horizon, specify a scalar value.

To vary the ECR value over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final ECR value is used for the remaining steps of the prediction horizon.

Example: [2 1 0.5]

OV upper bound softness, where a larger ECR value indicates a softer constraint, specified as a nonnegative finite scalar or vector. By default, OV upper bounds are soft constraints.

To use the same ECR value across the prediction horizon, specify a scalar value.

To vary the ECR value over the prediction horizon from time k+1 to time k+p, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final ECR value is used for the remaining steps of the prediction horizon.

Example: [5 2 1]

OV name, specified as a string or character vector. The default OV name is "y#", where # is its output index.

Example: "attack angle"

OV units, specified as a string or character vector.

Example: "degrees"

OV scale factor, specified as a positive finite scalar. In general, use the operating range of the output variable. Specifying the proper scale factor can improve numerical conditioning for optimization.

Example: 90
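A sketch with hypothetical values that softens the upper bound of the first output variable:

nlobj.OV(1).Max = 10;
nlobj.OV(1).MaxECR = 2;         % larger ECR values make the constraint softer
nlobj.OV(1).ScaleFactor = 90;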

Manipulated Variable (MV) information, bounds, and scale factors, specified as a structure array with Nmv elements, where Nmv is the number of manipulated variables. To access this property, you can use the alias MV instead of ManipulatedVariables.

Each structure element has the following fields.

MV lower bound, specified as a scalar or vector. By default, this lower bound is -Inf.

To use the same bound across the prediction horizon, specify a scalar value.

To vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

Example: [-1.1 -1]

MV upper bound, specified as a scalar or vector. By default, this upper bound is +Inf.

To use the same bound across the prediction horizon, specify a scalar value.

To vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

Example: [1.2 1]

MV lower bound softness, where a larger ECR value indicates a softer constraint, specified as a nonnegative scalar or vector. By default, MV lower bounds are hard constraints.

To use the same ECR value across the prediction horizon, specify a scalar value.

To vary the ECR value over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final ECR value is used for the remaining steps of the prediction horizon.

Example: [0.1 0]

MV upper bound softness, where a larger ECR value indicates a softer constraint, specified as a nonnegative scalar or vector. By default, MV upper bounds are hard constraints.

To use the same ECR value across the prediction horizon, specify a scalar value.

To vary the ECR value over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final ECR value is used for the remaining steps of the prediction horizon.

Example: [0.5 0.2]

MV rate of change lower bound, specified as a nonpositive scalar or vector. The MV rate of change is defined as MV(k) - MV(k–1), where k is the current time. By default, this lower bound is -Inf.

To use the same bound across the prediction horizon, specify a scalar value.

To vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

Example: [-50 -20]

MV rate of change upper bound, specified as a nonnegative scalar or vector. The MV rate of change is defined as MV(k) - MV(k–1), where k is the current time. By default, this upper bound is +Inf.

To use the same bound across the prediction horizon, specify a scalar value.

To vary the bound over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final bound is used for the remaining steps of the prediction horizon.

Example: [50 20]

MV rate of change lower bound softness, where a larger ECR value indicates a softer constraint, specified as a nonnegative finite scalar or vector. By default, MV rate of change lower bounds are hard constraints.

To use the same ECR value across the prediction horizon, specify a scalar value.

To vary the ECR values over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final ECR values are used for the remaining steps of the prediction horizon.

Example: [0.1 0]

MV rate of change upper bound softness, where a larger ECR value indicates a softer constraint, specified as a nonnegative finite scalar or vector. By default, MV rate of change upper bounds are hard constraints.

To use the same ECR value across the prediction horizon, specify a scalar value.

To vary the ECR values over the prediction horizon from time k to time k+p–1, specify a vector of up to p values. Here, k is the current time and p is the prediction horizon. If you specify fewer than p values, the final ECR values are used for the remaining steps of the prediction horizon.

Example: [1 0.5 0.2]

MV name, specified as a string or character vector. The default MV name is "u#", where # is its input index.

Example: "rudder angle"

MV units, specified as a string or character vector.

Example: "degrees"

MV scale factor, specified as a positive finite scalar. In general, use the operating range of the manipulated variable. Specifying the proper scale factor can improve numerical conditioning for optimization.

Example: 60
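A sketch with hypothetical values for the first manipulated variable:

nlobj.MV(1).Min = -100;        nlobj.MV(1).Max = 100;     % hard bounds by default
nlobj.MV(1).RateMin = -20;     nlobj.MV(1).RateMax = 20;  % rate-of-change bounds
nlobj.MV(1).RateMinECR = 0.1;                             % soften the rate lower bound
nlobj.MV(1).ScaleFactor = 200;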

Measured disturbance (MD) information and scale factors, specified as a structure array with Nmd elements, where Nmd is the number of measured disturbances. If your model does not have measured disturbances, then MeasuredDisturbances is []. To access this property, you can use the alias MD instead of MeasuredDisturbances.

Each structure element has the following fields.

MD name, specified as a string or character vector. The default MD name is "u#", where # is its input index.

Example: "wind speed"

MD units, specified as a string or character vector.

Example: "m/s"

MD scale factor, specified as a positive finite scalar. In general, use the operating range of the disturbance. Specifying the proper scale factor can improve numerical conditioning for optimization.

Example: 10

Standard cost function tuning weights, specified as a structure. The controller applies these weights to the scaled variables. Therefore, the tuning weights are dimensionless values.

Note

If you define a custom cost function using Optimization.CustomCostFcn and set Optimization.ReplaceStandardCost to true, then the controller ignores the standard cost function tuning weights in Weights.

Weights has the following fields.

Manipulated variable tuning weights, which penalize deviations from MV targets, specified as a row vector or array of nonnegative values. The default weight for all manipulated variables is 0.

To use the same weights across the prediction horizon, specify a row vector of length Nmv, where Nmv is the number of manipulated variables.

To vary the tuning weights over the prediction horizon from time k to time k+p-1, specify an array with Nmv columns and up to p rows. Here, k is the current time and p is the prediction horizon. Each row contains the manipulated variable tuning weights for one prediction horizon step. If you specify fewer than p rows, the weights in the final row are used for the remaining steps of the prediction horizon.

To specify MV targets at run time, in Simulink®, pass the target values to the Nonlinear MPC Controller block. In MATLAB, pass the target values to a simulation function (such as nlmpcmove, using the MVTarget property of an nlmpcmoveopt object).

Example: [0.1 0.2]

Manipulated variable rate tuning weights, which penalize large changes in control moves, specified as a row vector or array of nonnegative values. The default weight for all manipulated variable rates is 0.1.

To use the same weights across the prediction horizon, specify a row vector of length Nmv, where Nmv is the number of manipulated variables.

To vary the tuning weights over the prediction horizon from time k to time k+p-1, specify an array with Nmv columns and up to p rows. Here, k is the current time and p is the prediction horizon. Each row contains the manipulated variable rate tuning weights for one prediction horizon step. If you specify fewer than p rows, the weights in the final row are used for the remaining steps of the prediction horizon.

Example: [0.1 0.1]

Output variable tuning weights, which penalize deviation from output references, specified as a row vector or array of nonnegative values. The default weight for all output variables is 1.

To use the same weights across the prediction horizon, specify a row vector of length Ny, where Ny is the number of output variables.

To vary the tuning weights over the prediction horizon from time k+1 to time k+p, specify an array with Ny columns and up to p rows. Here, k is the current time and p is the prediction horizon. Each row contains the output variable tuning weights for one prediction horizon step. If you specify fewer than p rows, the weights in the final row are used for the remaining steps of the prediction horizon.

Example: [0.1 0.1]

Slack variable tuning weight, specified as a positive scalar.

Example: 1e4
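A sketch with hypothetical values for a controller with two output variables and one manipulated variable:

nlobj.Weights.OutputVariables = [1 0.5];       % one column per output variable
nlobj.Weights.ManipulatedVariablesRate = 0.1;  % one column per manipulated variable
nlobj.Weights.ECR = 1e4;                       % slack variable penalty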

Custom optimization functions and solver, specified as a structure with the following fields.

Custom cost function, specified as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Optimization.CustomCostFcn = "myCostFunction";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Optimization.CustomCostFcn = @myCostFunction;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Optimization.CustomCostFcn = @(X,U,e,data,params) myCostFunction(X,U,e,data,params);

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

Your cost function must have the signature:

function J = myCostFunction(X,U,e,data,params)

For more information, see Specify Cost Function for Nonlinear MPC.

Example: @costFcn
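As a sketch, a hypothetical cost function file myCostFunction.m that penalizes control effort over the prediction horizon could look like the following (it assumes the run-time data structure provides PredictionHorizon and MVIndex fields, as described in Specify Cost Function for Nonlinear MPC).

function J = myCostFunction(X,U,e,data,params)
% X: (p+1)-by-Nx state trajectory, U: (p+1)-by-Nu input trajectory,
% e: slack variable, data: run-time data, params: optional parameters.
p = data.PredictionHorizon;
J = sum(sum(U(1:p,data.MVIndex).^2));   % hypothetical: penalize control effort
end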

Option to replace the standard cost function with the custom cost function, specified as one of the following:

  • true — The controller uses the custom cost alone as the objective function during optimization. In this case, the Weights property of the controller is ignored.

  • false — The controller uses the sum of the standard cost and custom cost as the objective function during optimization.

If you do not specify a custom cost function using CustomCostFcn, then the controller ignores ReplaceStandardCost.

For more information, see Specify Cost Function for Nonlinear MPC.

Example: true

Custom equality constraint function, specified as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Optimization.CustomEqConFcn = "myEqConFunction";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Optimization.CustomEqConFcn = @myEqConFunction;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Optimization.CustomEqConFcn = @(X,U,data,params) myEqConFunction(X,U,data,params);

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

Your equality constraint function must have the signature:

function ceq = myEqConFunction(X,U,data,params)

For more information, see Specify Constraints for Nonlinear MPC.

Example: @eqFcn
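As a sketch, a hypothetical equality constraint file myEqConFunction.m that drives the first two states to zero at the end of the prediction horizon:

function ceq = myEqConFunction(X,U,data,params)
% Terminal condition: x1 and x2 must equal zero at the last prediction step.
ceq = X(end,1:2)';
end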

Custom inequality constraint function, specified as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Optimization.CustomIneqConFcn = "myIneqConFunction";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Optimization.CustomIneqConFcn = @myIneqConFunction;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Optimization.CustomIneqConFcn = @(X,U,e,data,params) myIneqConFunction(X,U,e,data,params);

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

Your inequality constraint function must have the signature:

function cineq = myIneqConFunction(X,U,e,data,params)

For more information, see Specify Constraints for Nonlinear MPC.

Example: @ineqFcn
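As a sketch, a hypothetical inequality constraint file myIneqConFunction.m that softly limits the sum of the first two states over the horizon (it assumes the run-time data structure provides a PredictionHorizon field; constraints are satisfied when the returned values are less than or equal to zero):

function cineq = myIneqConFunction(X,U,e,data,params)
p = data.PredictionHorizon;
% x1 + x2 <= 5 at each prediction step, softened by the slack variable e.
cineq = X(2:p+1,1) + X(2:p+1,2) - 5 - e;
end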

Custom nonlinear programming solver function, specified as a string, character vector, or function handle. If you do not have Optimization Toolbox™ software, you must specify your own custom nonlinear programming solver. You can specify your custom solver function in one of the following ways:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Optimization.CustomSolverFcn = "myNLPSolver";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Optimization.CustomSolverFcn = @myNLPSolver;

For more information, see Configure Optimization Solver for Nonlinear MPC.

Example: @mySolver

Solver options, specified as an options object for fmincon or [].

If you have Optimization Toolbox software, SolverOptions contains an options object for the fmincon solver.

If you do not have Optimization Toolbox, SolverOptions is [].

For more information, see Configure Optimization Solver for Nonlinear MPC.
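For example, assuming you have Optimization Toolbox software and therefore the default fmincon-based options object, a sketch of adjusting the solver settings is:

nlobj.Optimization.SolverOptions.Algorithm = 'sqp';
nlobj.Optimization.SolverOptions.ConstraintTolerance = 1e-6;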

Option to simulate as a linear controller, specified as one of the following:

  • "off" — Simulate the controller as a nonlinear controller with a nonlinear prediction model.

  • "Adaptive" — For each control interval, a linear model is obtained from the specified nonlinear state and output functions at the current operating point and used across the prediction horizon. To determine if an adaptive MPC controller provides comparable performance to the nonlinear controller, use this option. For more information on adaptive MPC, see Adaptive MPC.

  • "TimeVarying" — For each control interval, p linear models are obtained from the specified nonlinear state and output functions at the p operating points predicted from the previous interval, one for each prediction horizon step. To determine if a linear time-varying MPC controller provides comparable performance to the nonlinear controller, use this option. For more information on time-varying MPC, see Time-Varying MPC.

To use either the "Adaptive" or "TimeVarying" option, your controller must have no custom constraints and no custom cost function.

For an example that simulates a nonlinear MPC controller as a linear controller, see Optimization and Control of a Fed-Batch Reactor Using Nonlinear MPC.

Example: "Adaptive"

Option to accept a suboptimal solution, specified as a logical value. When the nonlinear programming solver reaches the maximum number of iterations without finding a solution (the exit flag is 0), the controller:

  • Freezes the MV values if UseSuboptimalSolution is false

  • Applies the suboptimal solution found by the solver after the final iteration if UseSuboptimalSolution is true

To specify the maximum number of iterations, use Optimization.SolverOptions.MaxIter.

Example: true

Linear interpolation order used by block moves, specified as one of the following:

  • 0 — Use piecewise constant manipulated variable intervals.

  • 1 — Use piecewise linear manipulated variable intervals.

If the control horizon is a scalar, then the controller ignores MVInterpolationOrder.

For more information on manipulated variable blocking, see Manipulated Variable Blocking.

Example: 1

Jacobians of model functions, and custom cost and constraint functions, specified as a structure. As a best practice, use Jacobians whenever they are available, since they improve optimization efficiency. If you do not specify a Jacobian for a given function, the nonlinear programming solver must numerically compute the Jacobian.

The Jacobian structure contains the following fields.

Jacobian of state function z from Model.StateFcn, specified as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Model.StateFcn = "myStateJacobian";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Jacobian.StateFcn = @myStateJacobian;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Jacobian.StateFcn = @(x,u,params) myStateJacobian(x,u,params)

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

For more information, see Specify Prediction Model for Nonlinear MPC.

Example: @Afcn
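As a sketch, a hypothetical Jacobian file myStateJacobian.m for the two-state, single-input model dxdt = [x(2); -x(1) - 0.5*x(2) + u(1)] returns the Jacobians with respect to the states and the manipulated variables (assuming the two-output form described in Specify Prediction Model for Nonlinear MPC).

function [A,Bmv] = myStateJacobian(x,u)
A = [0 1; -1 -0.5];   % partial derivatives with respect to the states
Bmv = [0; 1];         % partial derivatives with respect to the manipulated variables
end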

Jacobian of output function y from Model.OutputFcn, specified as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Model.StateFcn = "myOutputJacobian";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Jacobian.OutputFcn = @myOutputJacobian;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Jacobian.OutputFcn = @(x,u,params) myOutputJacobian(x,u,params)

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

For more information, see Specify Prediction Model for Nonlinear MPC.

Example: @Cfcn

Jacobian of custom cost function J from Optimization.CustomCostFcn, specified as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Jacobian.CustomCostFcn = "myCostJacobian";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Jacobian.CustomCostFcn = @myCostJacobian;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Jacobian.CustomCostFcn = @(X,U,e,data,params) myCostJacobian(X,U,e,data,params)

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

Your cost Jacobian function must have the signature:

function [G,Gmv,Ge] = myCostJacobian(X,U,e,data,params)

For more information, see Specify Cost Function for Nonlinear MPC.

Example: @costJacFcn

Jacobian of custom equality constraints ceq from Optimization.CustomEqConFcn, specified as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Jacobian.CustomEqConFcn = "myEqConJacobian";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Jacobian.CustomEqConFcn = @myEqConJacobian;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Jacobian.CustomEqConFcn = @(X,U,data,params) myEqConJacobian(X,U,data,params);

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

Your equality constraint Jacobian function must have the signature:

function [G,Gmv] = myEqConJacobian(X,U,data,params)

For more information, see Specify Constraints for Nonlinear MPC.

Example: @eqJacFcn

Jacobian of custom inequality constraints c from Optimization.CustomIneqConFcn, specified as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Jacobian.CustomEqConFcn = "myIneqConJacobian";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Jacobian.CustomIneqConFcn = @myIneqConJacobian;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Jacobian.CustomIneqConFcn = @(X,U,e,data,params) myIneqConJacobian(X,U,e,data,params);

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

Your inequality constraint Jacobian function must have the signature:

function [G,Gmv,Ge] = myIneqConJacobian(X,U,data,params)

For more information, see Specify Constraints for Nonlinear MPC.

Example: @ineqJacFcn

Passivity constraints, specified as a structure with the following fields.

When your nonlinear MPC controller is configured to use passivity constraints, at each step the optimization algorithm tries to enforce the inequality constraints:

yp(x,u)ᵀ up(x,u) ≥ νy yp(x,u)ᵀ yp(x,u) + νu up(x,u)ᵀ up(x,u) ≥ 0.

Here, νy is the output passivity index, νu is the input passivity index, up(x,u) is the passivity input function, and yp(x,u) is the passivity output function. The variables x and u are the current state and input of the prediction model.

Assuming that the plant is already passive with respect to the input-output pair up and yp, if these two inequalities are verified, then (under mild conditions) the resulting closed loop system tends to dissipate energy over time, and therefore has a stable equilibrium. For more information on passivity see Specify Constraints for Nonlinear MPC and, in the context of linear systems, About Passivity and Passivity Indices. For examples, see Control Quadruple-Tank Using Passivity-Based Nonlinear MPC and Control Robot Manipulator Using Passivity-Based Nonlinear MPC.
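A minimal configuration sketch, assuming functions ypFcn.m and upFcn.m on the MATLAB path define the passivity output and input functions, and using hypothetical index values:

nlobj.Passivity.EnforceConstraint = true;
nlobj.Passivity.OutputFcn = "ypFcn";
nlobj.Passivity.InputFcn = "upFcn";
nlobj.Passivity.OutputPassivityIndex = 0.1;
nlobj.Passivity.InputPassivityIndex = 0;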

Option to enforce constraints, specified as one of the following:

  • true — Passivity constraints are enforced during optimization. In this case, you must specify the Passivity.OutputFcn and Passivity.InputFcn properties.

  • false — Passivity constraints are not enforced during optimization.

Example: true

Desired output passivity index for the controller, specified as a nonnegative scalar.

If Passivity.EnforceConstraint is true, at each step the optimization algorithm tries to enforce the passivity inequality constraint, which involves the passivity index νy specified in Passivity.OutputPassivityIndex.

Example: 1

Desired input passivity index for the controller, specified as a nonnegative scalar.

If Passivity.EnforceConstraint is true, at each step the optimization algorithm tries to enforce the passivity inequality constraint, which involves the passivity index νu specified in Passivity.InputPassivityIndex.

Example: 1

Passivity output function, specified as a string, character vector, or function handle.

If Passivity.EnforceConstraint is true then at each step the optimization algorithm tries to enforce the input and output inequality constraints, which involve the function yp(x,u) specified in Passivity.OutputFcn.

You can specify your passivity output function as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Passivity.OutputFcn = "myPassivityOutputFcn";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Passivity.OutputFcn = @myPassivityOutputFcn;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Passivity.OutputFcn = @(x,u,params) myPassivityOutputFcn(x,u,params)

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

Here, x and u are the prediction model states and inputs, respectively, and params is an optional comma separated list of parameters (for example p1,p2,p3) that might be needed by the function you specify. If any of your functions use optional parameters, you must specify the number of parameters using Model.NumberOfParameters. At run time, in Simulink, you then pass these parameters to the Nonlinear MPC Controller block. In MATLAB, you pass the parameters to a simulation function (such as nlmpcmove, using an nlmpcmoveopt option set object).

Example: @ypFcn

Passivity input function, specified as a string, character vector, or function handle. If Passivity.EnforceConstraint is true, then at each step the optimization algorithm tries to enforce the input and output inequality constraints, which involve the function up(x,u) specified in Passivity.InputFcn.

You can specify your passivity input function as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Passivity.InputFcn = "myPassivityInputFcn";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Passivity.InputFcn = @myPassivityInputFcn;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Passivity.InputFcn = @(x,u,params) myPassivityInputFcn(x,u,params)

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

Here, x and u are the prediction model states and inputs, respectively, and params is an optional comma separated list of parameters (for example p1,p2,p3) that might be needed by the function you specify. If any of your functions use optional parameters, you must specify the number of parameters using Model.NumberOfParameters. At run time, in Simulink, you then pass these parameters to the Nonlinear MPC Controller block. In MATLAB, you pass the parameters to a simulation function (such as nlmpcmove, using an nlmpcmoveopt option set object).

Example: @upFcn

Jacobian of the passivity output function Passivity.OutputFcn, specified as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Passivity.OutputJacobianFcn = "myPsvOutJacFcn";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Passivity.OutputJacobianFcn = @myPsvOutJacFcn;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Passivity.OutputJacobianFcn = @(x,u,params) myPsvOutJacFcn(x,u,params)

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

Here, x and u are the prediction model states and inputs, respectively, and params is an optional comma-separated list (for example p1,p2,p3) of parameters that might be needed by the function you specify. If any of your functions use optional parameters, you must specify the number of parameters using Model.NumberOfParameters. At run time, in Simulink, you then pass these parameters to the Nonlinear MPC Controller block. In MATLAB, you pass the parameters to a simulation function (such as nlmpcmove, using an nlmpcmoveopt object).

The function specified in Passivity.OutputJacobianFcn (if any) must return as a first output argument the Jacobian matrix of the output passivity function with respect to the current state (an Nyp by Nx matrix), and as a second output argument the Jacobian matrix of the output passivity function with respect to the manipulated variables (an Nyp by Nmv matrix).

Here, Nx is the number of state variables of the prediction model, Nmv is the number of manipulated variables and Nyp is the number of outputs of the passivity output function.

Example: @ypJac

Jacobian of the passivity input function Passivity.InputFcn, specified as one of the following:

  • Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector

    Passivity.InputJacobianFcn = "myPsvInJacFcn";
  • Handle to a local function, or a function defined in the current working folder or on the MATLAB path

    Passivity.InputJacobianFcn = @myPsvInJacFcn;

    For more information on local functions, see Local Functions.

  • Anonymous function

    Passivity.InputJacobianFcn = @(x,u,params) myPsvInJacFcn(x,u,params)

    For more information on anonymous functions, see Anonymous Functions.

Note

Only functions defined in a separate file in the current folder or on the MATLAB path are supported for C/C++ code generation. Therefore, specifying state, output, cost, or constraint functions (or their Jacobians) as local or anonymous functions is not recommended.

Here, x and u are the prediction model states and inputs, respectively, and params is an optional comma-separated list (for example p1,p2,p3) of parameters that might be needed by the function you specify. If any of your functions use optional parameters, you must specify the number of parameters using Model.NumberOfParameters. At run time, in Simulink, you then pass these parameters to the Nonlinear MPC Controller block. In MATLAB, you pass the parameters to a simulation function (such as nlmpcmove, using an nlmpcmoveopt object).

The function specified in Passivity.InputJacobianFcn (if any) must return as a first output argument the Jacobian of the input passivity function with respect to the current state (an Nup by Nx matrix), and as a second output argument the Jacobian of the input passivity function with respect to the manipulated variables (an Nup by Nmv matrix).

Here, Nx is the number of state variables of the prediction model, Nmv is the number of manipulated variables and Nup is the number of outputs of the passivity input function.

Example: @upJac

Option to use predicted or current state, specified as one of the following:

  • true — x[k+1] is a decision variable in the optimization problem.

  • false — x[k] is a decision variable in the optimization problem.

Example: true

Object Functions

nlmpcmove — Compute optimal control action for nonlinear MPC controller
validateFcns — Examine prediction model and custom functions of nlmpc or nlmpcMultistage objects for potential problems
convertToMPC — Convert nlmpc object into one or more mpc objects
createParameterBus — Create Simulink bus object and configure Bus Creator block for passing model parameters to Nonlinear MPC Controller block

Examples


Create a nonlinear MPC controller with four states, two outputs, and one input.

nx = 4;
ny = 2;
nu = 1;
nlobj = nlmpc(nx,ny,nu);
Zero weights are applied to one or more OVs because there are fewer MVs than OVs.

Specify the sample time and horizons of the controller.

Ts = 0.1;
nlobj.Ts = Ts;
nlobj.PredictionHorizon = 10;
nlobj.ControlHorizon = 5;

Specify the state function for the controller, which is in the file pendulumDT0.m. This discrete-time model integrates the continuous-time model defined in pendulumCT0.m using a multistep forward Euler method.

nlobj.Model.StateFcn = "pendulumDT0";
nlobj.Model.IsContinuousTime = false;

The discrete-time state function uses an optional parameter, the sample time Ts, to integrate the continuous-time model. Therefore, you must specify the number of optional parameters as 1.

nlobj.Model.NumberOfParameters = 1;

Specify the output function for the controller. In this case, define the first and third states as outputs. Even though this output function does not use the optional sample time parameter, you must specify the parameter as an input argument (Ts).

nlobj.Model.OutputFcn = @(x,u,Ts) [x(1); x(3)];

Validate the prediction model functions for nominal states x0 and nominal inputs u0. Since the prediction model uses a custom parameter, you must pass this parameter to validateFcns.

x0 = [0.1;0.2;-pi/2;0.3];
u0 = 0.4;
validateFcns(nlobj, x0, u0, [], {Ts});
Model.StateFcn is OK.
Model.OutputFcn is OK.
Analysis of user-provided model, cost, and constraint functions complete.

Create a nonlinear MPC controller with three states, one output, and four inputs. The first two inputs are measured disturbances, the third input is the manipulated variable, and the fourth input is an unmeasured disturbance.

nlobj = nlmpc(3,1,'MV',3,'MD',[1 2],'UD',4);

To view the controller state, output, and input dimensions and indices, use the Dimensions property of the controller.

nlobj.Dimensions
ans = struct with fields:
     NumberOfStates: 3
    NumberOfOutputs: 1
     NumberOfInputs: 4
            MVIndex: 3
            MDIndex: [1 2]
            UDIndex: 4

Specify the controller sample time and horizons.

nlobj.Ts = 0.5;
nlobj.PredictionHorizon = 6;
nlobj.ControlHorizon = 3;

Specify the prediction model state function, which is in the file exocstrStateFcnCT.m.

nlobj.Model.StateFcn = 'exocstrStateFcnCT';

Specify the prediction model output function, which is in the file exocstrOutputFcn.m.

nlobj.Model.OutputFcn = 'exocstrOutputFcn';

Validate the prediction model functions using the initial operating point as the nominal condition for testing and setting the unmeasured disturbance state, x0(3), to 0. Since the model has measured disturbances, you must pass them to validateFcns.

x0 = [311.2639; 8.5698; 0];
u0 = [10; 298.15; 298.15];
validateFcns(nlobj,x0,u0(3),u0(1:2)');
Model.StateFcn is OK.
Model.OutputFcn is OK.
Analysis of user-provided model, cost, and constraint functions complete.

Create nonlinear MPC controller with six states, six outputs, and four inputs.

nx = 6;
ny = 6;
nu = 4;
nlobj = nlmpc(nx,ny,nu);
Zero weights are applied to one or more OVs because there are fewer MVs than OVs.

Specify the controller sample time and horizons.

Ts = 0.4;
p = 30;
c = 4;
nlobj.Ts = Ts;
nlobj.PredictionHorizon = p;
nlobj.ControlHorizon = c;

Specify the prediction model state function and the Jacobian of the state function. For this example, use a model of a flying robot.

nlobj.Model.StateFcn = "FlyingRobotStateFcn";
nlobj.Jacobian.StateFcn = "FlyingRobotStateJacobianFcn";

Specify a custom cost function for the controller that replaces the standard cost function.

nlobj.Optimization.CustomCostFcn = @(X,U,e,data) Ts*sum(sum(U(1:p,:)));
nlobj.Optimization.ReplaceStandardCost = true;

Specify a custom constraint function for the controller.

nlobj.Optimization.CustomEqConFcn = @(X,U,data) X(end,:)';

Validate the prediction model and custom functions at the initial states (x0) and initial inputs (u0) of the robot.

x0 = [-10;-10;pi/2;0;0;0];
u0 = zeros(nu,1); 
validateFcns(nlobj,x0,u0);
Model.StateFcn is OK.
Jacobian.StateFcn is OK.
No output function specified. Assuming "y = x" in the prediction model.
Optimization.CustomCostFcn is OK.
Optimization.CustomEqConFcn is OK.
Analysis of user-provided model, cost, and constraint functions complete.

Create a nonlinear MPC controller with four states, one output variable, one manipulated variable, and one measured disturbance.

nlobj = nlmpc(4,1,'MV',1,'MD',2);

Specify the controller sample time and horizons.

nlobj.PredictionHorizon = 10;
nlobj.ControlHorizon = 3;

Specify the state function of the prediction model.

nlobj.Model.StateFcn = 'oxidationStateFcn';

Specify the prediction model output function and the output variable scale factor.

nlobj.Model.OutputFcn = @(x,u) x(3);
nlobj.OutputVariables.ScaleFactor = 0.03;

Specify the manipulated variable constraints and scale factor.

nlobj.ManipulatedVariables.Min = 0.0704;
nlobj.ManipulatedVariables.Max = 0.7042;
nlobj.ManipulatedVariables.ScaleFactor = 0.6;

Specify the measured disturbance scale factor.

nlobj.MeasuredDisturbances.ScaleFactor = 0.5;

Compute the state and input operating conditions for three linear MPC controllers using the fsolve function.

options = optimoptions('fsolve','Display','none');

uLow = [0.38 0.5];
xLow = fsolve(@(x) oxidationStateFcn(x,uLow),[1 0.3 0.03 1],options);

uMedium = [0.24 0.5];
xMedium = fsolve(@(x) oxidationStateFcn(x,uMedium),[1 0.3 0.03 1],options);

uHigh = [0.15 0.5];
xHigh = fsolve(@(x) oxidationStateFcn(x,uHigh),[1 0.3 0.03 1],options);

Create linear MPC controllers for each of these nominal conditions.

mpcobjLow = convertToMPC(nlobj,xLow,uLow);
mpcobjMedium = convertToMPC(nlobj,xMedium,uMedium);
mpcobjHigh = convertToMPC(nlobj,xHigh,uHigh);

You can also create multiple controllers using arrays of nominal conditions. The number of rows in the arrays specifies the number of controllers to create. The linear controllers are returned as a cell array of mpc objects.

u = [uLow; uMedium; uHigh];
x = [xLow; xMedium; xHigh];
mpcobjs = convertToMPC(nlobj,x,u);

View the properties of the mpcobjLow controller.

mpcobjLow
 
MPC object (created on 13-Feb-2024 00:26:46):
---------------------------------------------
Sampling time:      1 (seconds)
Prediction Horizon: 10
Control Horizon:    3

Plant Model:        
                                      --------------
      1  manipulated variable(s)   -->|  4 states  |
                                      |            |-->  1 measured output(s)
      1  measured disturbance(s)   -->|  2 inputs  |
                                      |            |-->  0 unmeasured output(s)
      0  unmeasured disturbance(s) -->|  1 outputs |
                                      --------------
Indices:
  (input vector)    Manipulated variables: [1 ]
                    Measured disturbances: [2 ]
  (output vector)        Measured outputs: [1 ]

Disturbance and Noise Models:
        Output disturbance model: default (type "getoutdist(mpcobjLow)" for details)
         Measurement noise model: default (unity gain after scaling)

Weights:
        ManipulatedVariables: 0
    ManipulatedVariablesRate: 0.1000
             OutputVariables: 1
                         ECR: 100000

State Estimation:  Default Kalman Filter (type "getEstimator(mpcobjLow)" for details)

Constraints:
 0.0704 <= u1 <= 0.7042, u1/rate is unconstrained, y1 is unconstrained

Use built-in "active-set" QP solver with MaxIterations of 120.

Create a nonlinear MPC controller with six states, six outputs, and four inputs.

nx = 6;
ny = 6;
nu = 4;
nlobj = nlmpc(nx,ny,nu);
Zero weights are applied to one or more OVs because there are fewer MVs than OVs.

Specify the controller sample time and horizons.

Ts = 0.4;
p = 30;
c = 4;
nlobj.Ts = Ts;
nlobj.PredictionHorizon = p;
nlobj.ControlHorizon = c;

Specify the prediction model state function and the Jacobian of the state function. For this example, use a model of a flying robot.

nlobj.Model.StateFcn = "FlyingRobotStateFcn";
nlobj.Jacobian.StateFcn = "FlyingRobotStateJacobianFcn";

Specify a custom cost function for the controller that replaces the standard cost function.

nlobj.Optimization.CustomCostFcn = @(X,U,e,data) Ts*sum(sum(U(1:p,:)));
nlobj.Optimization.ReplaceStandardCost = true;

Specify a custom constraint function for the controller.

nlobj.Optimization.CustomEqConFcn = @(X,U,data) X(end,:)';

Specify linear constraints on the manipulated variables.

for ct = 1:nu
    nlobj.MV(ct).Min = 0;
    nlobj.MV(ct).Max = 1;
end

Validate the prediction model and custom functions at the initial states (x0) and initial inputs (u0) of the robot.

x0 = [-10;-10;pi/2;0;0;0];
u0 = zeros(nu,1); 
validateFcns(nlobj,x0,u0);
Model.StateFcn is OK.
Jacobian.StateFcn is OK.
No output function specified. Assuming "y = x" in the prediction model.
Optimization.CustomCostFcn is OK.
Optimization.CustomEqConFcn is OK.
Analysis of user-provided model, cost, and constraint functions complete.

Compute the optimal state and manipulated variable trajectories, which are returned in the info output argument.

[~,~,info] = nlmpcmove(nlobj,x0,u0);
Slack variable unused or zero-weighted in your custom cost function.
All constraints will be hard.

Plot the optimal trajectories.

FlyingRobotPlotPlanning(info,Ts)
Optimal fuel consumption =   1.884953

Create a nonlinear MPC controller with four states, two outputs, and one input.

nlobj = nlmpc(4,2,1);
Zero weights are applied to one or more OVs because there are fewer MVs than OVs.

Specify the sample time and horizons of the controller.

Ts = 0.1;
nlobj.Ts = Ts;
nlobj.PredictionHorizon = 10;
nlobj.ControlHorizon = 5;

Specify the state function for the controller, which is in the file pendulumDT0.m. This discrete-time model integrates the continuous-time model defined in pendulumCT0.m using a multistep forward Euler method.

nlobj.Model.StateFcn = "pendulumDT0";
nlobj.Model.IsContinuousTime = false;

The prediction model uses an optional parameter, Ts, to represent the sample time. Specify the number of parameters.

nlobj.Model.NumberOfParameters = 1;

Specify the output function of the model, passing the sample time parameter as an input argument.

nlobj.Model.OutputFcn = @(x,u,Ts) [x(1); x(3)];

Define standard constraints for the controller.

nlobj.Weights.OutputVariables = [3 3];
nlobj.Weights.ManipulatedVariablesRate = 0.1;
nlobj.OV(1).Min = -10;
nlobj.OV(1).Max = 10;
nlobj.MV.Min = -100;
nlobj.MV.Max = 100;

Validate the prediction model functions.

x0 = [0.1;0.2;-pi/2;0.3];
u0 = 0.4;
validateFcns(nlobj, x0, u0, [], {Ts});
Model.StateFcn is OK.
Model.OutputFcn is OK.
Analysis of user-provided model, cost, and constraint functions complete.

Only two of the plant states are measurable. Therefore, create an extended Kalman filter for estimating the four plant states. Its state transition function is defined in pendulumStateFcn.m and its measurement function is defined in pendulumMeasurementFcn.m.

EKF = extendedKalmanFilter(@pendulumStateFcn,@pendulumMeasurementFcn);

Define initial conditions for the simulation, initialize the extended Kalman filter state, and specify a zero initial manipulated variable value.

x = [0;0;-pi;0];
y = [x(1);x(3)];
EKF.State = x;
mv = 0;

Specify the output reference value.

yref = [0 0];

Create an nlmpcmoveopt object, and specify the sample time parameter.

nloptions = nlmpcmoveopt;
nloptions.Parameters = {Ts};

Run the simulation for 10 seconds. During each control interval:

  1. Correct the previous prediction using the current measurement.

  2. Compute optimal control moves using nlmpcmove. This function returns the computed optimal sequences in nloptions. Passing the updated options object to nlmpcmove in the next control interval provides initial guesses for the optimal sequences.

  3. Predict the model states.

  4. Apply the first computed optimal control move to the plant, updating the plant states.

  5. Generate sensor data with white noise.

  6. Save the plant states.

Duration = 10;
xHistory = x;
for ct = 1:(Duration/Ts)
    % Correct previous prediction
    xk = correct(EKF,y);
    % Compute optimal control moves
    [mv,nloptions] = nlmpcmove(nlobj,xk,mv,yref,[],nloptions);
    % Predict prediction model states for the next iteration
    predict(EKF,[mv; Ts]);
    % Implement first optimal control move
    x = pendulumDT0(x,mv,Ts);
    % Generate sensor data
    y = x([1 3]) + randn(2,1)*0.01;
    % Save plant states
    xHistory = [xHistory x];
end

Plot the resulting state trajectories.

figure
subplot(2,2,1)
plot(0:Ts:Duration,xHistory(1,:))
xlabel('time')
ylabel('z')
title('cart position')
subplot(2,2,2)
plot(0:Ts:Duration,xHistory(2,:))
xlabel('time')
ylabel('zdot')
title('cart velocity')
subplot(2,2,3)
plot(0:Ts:Duration,xHistory(3,:))
xlabel('time')
ylabel('theta')
title('pendulum angle')
subplot(2,2,4)
plot(0:Ts:Duration,xHistory(4,:))
xlabel('time')
ylabel('thetadot')
title('pendulum velocity')

Version History

Introduced in R2018b