## Optimization Problem

### Overview

Model predictive control solves an optimization problem – specifically, a quadratic program (QP) – at each control interval. The solution determines the manipulated variables (MVs) to be used in the plant until the next control interval.

This QP problem includes the following features:

• The objective, or "cost", function — A scalar, nonnegative measure of controller performance to be minimized.

• Constraints — Conditions the solution must satisfy, such as physical bounds on MVs and plant output variables.

• Decision — The MV adjustments that minimize the cost function while satisfying the constraints.

The following sections describe these features in more detail.

### Standard Cost Function

The standard cost function is the sum of four terms, each focusing on a particular aspect of controller performance, as follows:

`$J\left({z}_{k}\right)={J}_{y}\left({z}_{k}\right)+{J}_{u}\left({z}_{k}\right)+{J}_{\Delta u}\left({z}_{k}\right)+{J}_{\epsilon }\left({z}_{k}\right).$`

Here, zk is the QP decision. As described below, each term includes weights that help you balance competing objectives. While the MPC controller provides default weights, you will usually need to adjust them to tune the controller for your application.

#### Output Reference Tracking

In most applications, the controller must keep selected plant outputs at or near specified reference values. An MPC controller uses the following scalar performance measure for output reference tracking:

`${J}_{y}\left({z}_{k}\right)=\sum _{j=1}^{{n}_{y}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\sum _{i=1}^{p}{\left\{\frac{{w}_{i,j}^{y}}{{s}_{j}^{y}}\left[{r}_{j}\left(k+i\text{\hspace{0.17em}}\text{|}k\right)-{y}_{j}\left(k+i\text{\hspace{0.17em}}\text{|}k\right)\right]\right\}}^{2}.$`

Here,

• k — Current control interval

• p — Prediction horizon (number of intervals)

• ny — Number of plant output variables

• zk — QP decision variables vector, given by:

`${z}_{k}^{T}=\left[\begin{array}{ccccc}u{\left(k|k\right)}^{T}& u{\left(k+1|k\right)}^{T}& \cdots & u{\left(k+p-1|k\right)}^{T}& {\epsilon }_{k}\end{array}\right].$`
• yj(k+i|k) — Predicted value of the jth plant output at the ith prediction horizon step, in engineering units

• rj(k+i|k) — Reference value for the jth plant output at the ith prediction horizon step, in engineering units

• ${s}_{j}^{y}$ — Scale factor for the jth plant output, in engineering units.

• ${w}_{i,j}^{y}$ — Tuning weight for the jth plant output at the ith prediction horizon step (dimensionless).

• εk — Slack variable at control interval k (dimensionless).

The values ny, p, ${s}_{j}^{y}$, and ${w}_{i,j}^{y}$ are constant controller specifications. The controller receives reference values, rj(k+i|k), for the entire prediction horizon. The controller uses the state observer to predict the plant outputs, yj(k+i|k), which depend on manipulated variable adjustments (zk), measured disturbances (MD), and state estimates. At interval k, the controller state estimates and MD values are available. Therefore, Jy is a function of zk only.
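As an illustration of the tracking term above (a Python/NumPy sketch, not the toolbox's MATLAB implementation, with hypothetical weights, scale factors, references, and predictions), Jy is simply a weighted sum of squared scaled tracking errors:

```python
import numpy as np

# Hypothetical data: p = 3 prediction steps, ny = 2 plant outputs.
p, ny = 3, 2
w = np.ones((p, ny))          # tuning weights w^y_{i,j} (dimensionless)
s = np.array([10.0, 2.0])     # scale factors s^y_j (engineering units)
r = np.array([[1.0, 0.5],     # references r_j(k+i|k), rows are i = 1..p
              [1.0, 0.5],
              [1.0, 0.5]])
y = np.array([[0.8, 0.4],     # predicted outputs y_j(k+i|k)
              [0.9, 0.45],
              [1.0, 0.5]])

# J_y = sum over steps i and outputs j of ((w/s) * (r - y))^2
Jy = np.sum(((w / s) * (r - y)) ** 2)
print(Jy)
```

In the controller, the predictions y would come from the state observer as a function of zk; here they are fixed numbers so that only the cost evaluation is shown.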

The problem formulation used here, in which the decision variables of the optimization problem are just the manipulated variables (plus a slack variable), is called a dense formulation. It is also known as a sequential or single-shooting formulation.

A different formulation is possible in which the decision variables of the optimization problem also include all the intermediate state variables. In this case, equality constraints of the type x(k+1) = Ax(k) + Bu(k) must be enforced. This formulation results in a larger problem with many equality constraints, but a more regular sparse structure, an easier-to-compute cost function, and better numerical conditioning for large horizons. This alternative formulation is referred to as the sparse (also known as simultaneous or multiple-shooting) approach.

The default problem formulation used in Model Predictive Control Toolbox™ for linear MPC problems is the dense formulation, because it can have a smaller memory footprint (when a generic QP solver is used) and tends to be more efficient for stable linear problems in which the prediction horizon is small and a considerable amount of manipulated variable blocking is used.

#### Manipulated Variable Tracking

In some applications, such as when there are more manipulated variables than plant outputs, the controller must keep selected manipulated variables (MVs) at or near specified target values. An MPC controller uses the following scalar performance measure for manipulated variable tracking:

`${J}_{u}\left({z}_{k}\right)=\sum _{j=1}^{{n}_{u}}\sum _{i=0}^{p-1}{\left\{\frac{{w}_{i,j}^{u}}{{s}_{j}^{u}}\left[{u}_{j}\left(k+i\text{\hspace{0.17em}}\text{|}k\right)-{u}_{j,target}\left(k+i\text{\hspace{0.17em}}\text{|}k\right)\right]\right\}}^{2}.$`

Here,

• k — Current control interval

• p — Prediction horizon (number of intervals)

• nu — Number of manipulated variables

• zk — QP decision variables vector, given by:

`${z}_{k}^{T}=\left[\begin{array}{ccccc}u{\left(k|k\right)}^{T}& u{\left(k+1|k\right)}^{T}& \cdots & u{\left(k+p-1|k\right)}^{T}& {\epsilon }_{k}\end{array}\right].$`
• uj,target(k+i|k) — Target value for the jth MV at the ith prediction horizon step, in engineering units

• ${s}_{j}^{u}$ — Scale factor for the jth MV, in engineering units

• ${w}_{i,j}^{u}$ — Tuning weight for the jth MV at the ith prediction horizon step (dimensionless)

• εk — Slack variable at control interval k (dimensionless)

The values nu, p, ${s}_{j}^{u}$, and ${w}_{i,j}^{u}$ are constant controller specifications. The controller receives uj,target(k+i|k) values for the entire horizon. The controller uses the state observer to predict the plant outputs. Thus, Ju is a function of zk only.

#### Manipulated Variable Move Suppression

Most applications prefer small MV adjustments (moves). An MPC controller uses the following scalar performance measure for manipulated variable move suppression:

`${J}_{\Delta u}\left({z}_{k}\right)=\sum _{j=1}^{{n}_{u}}\text{\hspace{0.17em}}\sum _{i=0}^{p-1}{\left\{\frac{{w}_{i,j}^{\Delta u}}{{s}_{j}^{u}}\left[{u}_{j}\left(k+i\text{\hspace{0.17em}}\text{|}k\right)-{u}_{j}\left(k+i-1\text{\hspace{0.17em}}\text{|}k\right)\right]\right\}}^{2}.$`

Here,

• k — Current control interval

• p — Prediction horizon (number of intervals)

• nu — Number of manipulated variables

• zk — QP decision variables vector, given by:

`${z}_{k}^{T}=\left[\begin{array}{ccccc}u{\left(k|k\right)}^{T}& u{\left(k+1|k\right)}^{T}& \cdots & u{\left(k+p-1|k\right)}^{T}& {\epsilon }_{k}\end{array}\right].$`
• ${s}_{j}^{u}$ — Scale factor for the jth MV, in engineering units

• ${w}_{i,j}^{\Delta u}$ — Tuning weight for the jth MV movement at the ith prediction horizon step (dimensionless)

• εk — Slack variable at control interval k (dimensionless)

The values nu, p, ${s}_{j}^{u}$, and ${w}_{i,j}^{\Delta u}$ are constant controller specifications. Also, u(k–1|k) = u(k–1), the known MV values from the previous control interval. Thus, JΔu is a function of zk only.

In addition, a control horizon m < p (or MV blocking) constrains certain MV moves to be zero.

#### Constraint Violation

In practice, constraint violations might be unavoidable. Soft constraints allow a feasible QP solution under such conditions. An MPC controller employs a dimensionless, nonnegative slack variable, εk, which quantifies the worst-case constraint violation (See Constraints). The corresponding performance measure is:

`${J}_{\epsilon }\left({z}_{k}\right)={\rho }_{\epsilon }{\epsilon }_{k}^{2}.$`

Here,

• zk — QP decision variables vector, given by:

`${z}_{k}^{T}=\left[\begin{array}{ccccc}u{\left(k|k\right)}^{T}& u{\left(k+1|k\right)}^{T}& \cdots & u{\left(k+p-1|k\right)}^{T}& {\epsilon }_{k}\end{array}\right].$`
• εk — Slack variable at the control interval k (dimensionless).

• ρε — Constraint violation penalty weight (dimensionless). This weight corresponds to the `Weights.ECR` property of an `mpc` object.

### Alternative Cost Function

You can elect to use the following alternative to the standard cost function:

`$J\left({z}_{k}\right)=\sum _{i=0}^{p-1}\left\{\left[{e}_{y}^{T}\left(k+i\right)Q{e}_{y}\left(k+i\right)\right]+\left[{e}_{u}^{T}\left(k+i\right){R}_{u}{e}_{u}\left(k+i\right)\right]+\left[\Delta {u}^{T}\left(k+i\right){R}_{\Delta u}\Delta u\left(k+i\right)\right]\right\}+{\rho }_{ϵ}{\epsilon }_{k}^{2}.$`

Here, Q (ny-by-ny), Ru, and RΔu (nu-by-nu) are positive semidefinite weight matrices, and:

`$\begin{array}{c}{e}_{y}\left(i+k\right)={L}_{Sy}^{-1}\left[r\left(k+i+1\text{|}k\right)-y\left(k+i+1|k\right)\right]\\ {e}_{u}\left(i+k\right)={L}_{Su}^{-1}\left[{u}_{target}\left(k+i\text{|}k\right)-u\left(k+i|k\right)\right]\\ \Delta u\left(k+i\right)={L}_{Su}^{-1}\left[u\left(k+i\text{|}k\right)-u\left(k+i-1|k\right)\right]\end{array}$`

Also,

• LSy — Diagonal matrix of plant output variable scale factors sy, in engineering units

• LSu — Diagonal matrix of MV scale factors su, in engineering units

• r(k+i+1|k) — ny plant output reference values at the ith prediction horizon step, in engineering units

• y(k+i+1|k) — ny predicted plant output values at the ith prediction horizon step, in engineering units

• zk — QP decision variables vector, given by:

`${z}_{k}^{T}=\left[\begin{array}{ccccc}u{\left(k|k\right)}^{T}& u{\left(k+1|k\right)}^{T}& \cdots & u{\left(k+p-1|k\right)}^{T}& {\epsilon }_{k}\end{array}\right].$`
• utarget(k+i|k) — nu MV target values corresponding to u(k+i|k), in engineering units.

Output predictions use the state observer, as in the standard cost function.

The alternative cost function allows off-diagonal weighting, but requires the weights to be identical at each prediction horizon step.

The alternative and standard cost functions are identical if the following conditions hold:

• The standard cost function employs weights ${w}_{i,j}^{y}$, ${w}_{i,j}^{u}$, and ${w}_{i,j}^{\Delta u}$ that are constant with respect to the index, i = 1:p.

• The matrices Q, Ru, and RΔu are diagonal with the squares of those weights as the diagonal elements.
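The equivalence conditions can be sketched numerically (a Python/NumPy illustration with hypothetical weights, not toolbox code): build Q, Ru, and RΔu as diagonal matrices of squared weights and check that both forms price the same scaled error identically.

```python
import numpy as np

# Hypothetical weights, identical for every step i (ny = nu = 2).
wy  = np.array([1.0, 0.5])    # w^y_j
wu  = np.array([0.0, 0.1])    # w^u_j
wdu = np.array([0.2, 0.3])    # w^du_j

# Alternative-form matrices: diagonal, with squared weights on the diagonal
Q   = np.diag(wy ** 2)
Ru  = np.diag(wu ** 2)
Rdu = np.diag(wdu ** 2)

# One-step check on a scaled tracking error e: both forms agree
e = np.array([0.4, -1.2])
standard    = np.sum((wy * e) ** 2)   # sum_j (w^y_j * e_j)^2
alternative = e @ Q @ e               # e' Q e
print(standard, alternative)
```

The same diagonal construction applies to Ru and RΔu; off-diagonal entries are exactly what the standard form cannot express.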

### Constraints

Certain constraints are implicit. For example, a control horizon m < p (or MV blocking) forces some MV increments to be zero, and the state observer used for plant output prediction is a set of implicit equality constraints. Explicit constraints that you can configure are described below.

#### Bounds on Plant Outputs, MVs, and MV Increments

The most common MPC constraints are bounds on plant outputs, MVs, and MV increments, softened by the slack variable εk:

`$\frac{{y}_{j,min}\left(i\right)}{{s}_{j}^{y}}-{\epsilon }_{k}{V}_{min,i,j}^{y}\le \frac{{y}_{j}\left(k+i|k\right)}{{s}_{j}^{y}}\le \frac{{y}_{j,max}\left(i\right)}{{s}_{j}^{y}}+{\epsilon }_{k}{V}_{max,i,j}^{y},\text{ }i=1:p,\text{ }j=1:{n}_{y}$`

`$\frac{{u}_{j,min}\left(i\right)}{{s}_{j}^{u}}-{\epsilon }_{k}{V}_{min,i,j}^{u}\le \frac{{u}_{j}\left(k+i-1|k\right)}{{s}_{j}^{u}}\le \frac{{u}_{j,max}\left(i\right)}{{s}_{j}^{u}}+{\epsilon }_{k}{V}_{max,i,j}^{u},\text{ }i=1:p,\text{ }j=1:{n}_{u}$`

`$\frac{\Delta {u}_{j,min}\left(i\right)}{{s}_{j}^{u}}-{\epsilon }_{k}{V}_{min,i,j}^{\Delta u}\le \frac{\Delta {u}_{j}\left(k+i-1|k\right)}{{s}_{j}^{u}}\le \frac{\Delta {u}_{j,max}\left(i\right)}{{s}_{j}^{u}}+{\epsilon }_{k}{V}_{max,i,j}^{\Delta u},\text{ }i=1:p,\text{ }j=1:{n}_{u}$`

Here, the V parameters (ECR values) are dimensionless controller constants analogous to the cost function weights but used for constraint softening, and correspond to the `MinECR`, `MaxECR`, `RateMinECR` and `RateMaxECR` vectors stored in the `OutputVariables` and `ManipulatedVariables` properties of an `mpc` object. For more information, see Specify Constraints.

Also,

• εk — Scalar QP slack variable (dimensionless) used for constraint softening.

• ${s}_{j}^{y}$ — Scale factor for jth plant output, in engineering units.

• ${s}_{j}^{u}$ — Scale factor for jth MV, in engineering units.

• yj,min(i), yj,max(i) — lower and upper bounds for jth plant output at ith prediction horizon step, in engineering units.

• uj,min(i), uj,max(i) — lower and upper bounds for jth MV at ith prediction horizon step, in engineering units.

• Δuj,min(i), Δuj,max(i) — lower and upper bounds for jth MV increment at ith prediction horizon step, in engineering units.

Except for the slack variable non-negativity condition, all of the above constraints are optional and are inactive by default (i.e., initialized with infinite limiting values). To include a bound constraint, you must specify a finite limit when you design the controller.

#### Mixed Input/Output Linear Constraints

You can also introduce mixed input/output linear constraints.

The general form of the mixed input/output constraints is:

Eu(k + j) + Fy(k + j) + Sv(k + j) ≤ G + εV

Here, j = 0,...,p, and:

• p is the prediction horizon.

• k is the current time index.

• u is a column vector of manipulated variables.

• y is a column vector of all plant output variables.

• v is a column vector of measured disturbance variables.

• ε is a scalar slack variable used for constraint softening (as in Standard Cost Function).

• E, F, G, V, and S are constant matrices.
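As a sketch of how such a constraint behaves (Python/NumPy, with hypothetical matrices; the toolbox handles this internally), the following checks one mixed constraint at a single prediction step, first as a hard constraint and then softened by a positive ε:

```python
import numpy as np

# Hypothetical constraint data: one constraint row, nu = 2, ny = 1, nv = 1.
E = np.array([[1.0, 0.0]])
F = np.array([[0.5]])
S = np.array([[0.0]])
G = np.array([2.0])
V = np.array([1.0])                   # ECR vector: constraint can be softened

u = np.array([1.0, 3.0])
y = np.array([2.5])
v = np.array([0.0])

lhs = E @ u + F @ y + S @ v           # 1.0*1.0 + 0.5*2.5 = 2.25
violated  = bool(np.any(lhs > G + 0.0 * V))   # hard check (eps = 0)
satisfied = bool(np.all(lhs <= G + 0.5 * V))  # softened, eps = 0.5
print(lhs, violated, satisfied)
```

A zero entry in V would make the corresponding row a hard constraint regardless of ε.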

### QP Matrices

This section describes the matrices associated with the model predictive control optimization problem described in Optimization Problem. Note that the vector of decision variables used in this section (and in the actual implementation of the software) is:

`${z}_{k}^{T}=\left[\begin{array}{ccccc}\Delta u{\left(k|k\right)}^{T}& \Delta u{\left(k+1|k\right)}^{T}& \cdots & \Delta u{\left(k+p-1|k\right)}^{T}& {\epsilon }_{k}\end{array}\right].$`

where

`$\Delta u\left(k+i\right)={S}_{u}^{-1}\left[u\left(k+i\text{|}k\right)-u\left(k+i-1|k\right)\right].$`

This vector of decision variables is mathematically equivalent to the one described in the previous sections, but, from an implementation perspective, has advantages for formulations that allow for weights and constraints on increments of manipulated variables.

#### Prediction

Assume that the disturbance models described in Input Disturbance Model are unit gains; that is, d(k) = nd(k) is white Gaussian noise. You can then merge the plant and disturbance models into a single prediction model by redefining the state and matrices as follows:

`$x←\left[\begin{array}{c}x\\ {x}_{d}\end{array}\right],\text{\hspace{0.17em}}A←\left[\begin{array}{cc}A& {B}_{d}\overline{C}\\ 0& \overline{A}\end{array}\right],\text{\hspace{0.17em}}{B}_{u}←\left[\begin{array}{c}{B}_{u}\\ 0\end{array}\right],\text{\hspace{0.17em}}{B}_{v}←\left[\begin{array}{c}{B}_{v}\\ 0\end{array}\right],\text{\hspace{0.17em}}{B}_{d}←\left[\begin{array}{c}{B}_{d}\overline{D}\\ \overline{B}\end{array}\right],\text{\hspace{0.17em}}C←\left[\begin{array}{cc}C& {D}_{d}\overline{C}\end{array}\right]$`

Then, the prediction model is:

x(k+1) = Ax(k) +Buu(k) +Bvv(k)+Bdnd(k)

y(k) = Cx(k) +Dvv(k) +Ddnd(k)

Next, consider the problem of predicting the future trajectories of the model at time k = 0. Set nd(i) = 0 for all prediction instants i, and obtain

`$y\left(i|0\right)=C\left[{A}^{i}x\left(0\right)+\sum _{h=0}^{i-1}{A}^{i-1-h}\left({B}_{u}\left(u\left(-1\right)+\sum _{j=0}^{h}\Delta u\left(j\right)\right)+{B}_{v}v\left(h\right)\right)\right]+{D}_{v}v\left(i\right)$`

This equation gives the solution

`$\left[\begin{array}{c}y\left(1\right)\\ \cdots \\ y\left(p\right)\end{array}\right]={S}_{x}x\left(0\right)+{S}_{u1}u\left(-1\right)+{S}_{u}\left[\begin{array}{c}\Delta u\left(0\right)\\ \cdots \\ \Delta u\left(p-1\right)\end{array}\right]+{H}_{v}\left[\begin{array}{c}v\left(0\right)\\ \cdots \\ v\left(p\right)\end{array}\right]$`

where

`$\begin{array}{l}{S}_{x}=\left[\begin{array}{c}CA\\ C{A}^{2}\\ \cdots \\ C{A}^{p}\end{array}\right]\in {\Re }^{p{n}_{y}×{n}_{x}},{S}_{u1}=\left[\begin{array}{c}C{B}_{u}\\ C{B}_{u}+CA{B}_{u}\\ \cdots \\ \sum _{h=0}^{p-1}C{A}^{h}{B}_{u}\end{array}\right]\in {\Re }^{p{n}_{y}×{n}_{u}}\\ {S}_{u}=\left[\begin{array}{cccc}C{B}_{u}& 0& \cdots & 0\\ C{B}_{u}+CA{B}_{u}& C{B}_{u}& \cdots & 0\\ \cdots & \cdots & \cdots & \cdots \\ \sum _{h=0}^{p-1}C{A}^{h}{B}_{u}& \sum _{h=0}^{p-2}C{A}^{h}{B}_{u}& \cdots & C{B}_{u}\end{array}\right]\in {\Re }^{p{n}_{y}×p{n}_{u}}\\ {H}_{v}=\left[\begin{array}{ccccc}C{B}_{v}& {D}_{v}& 0& \cdots & 0\\ CA{B}_{v}& C{B}_{v}& {D}_{v}& \cdots & 0\\ \cdots & \cdots & \cdots & \cdots & \cdots \\ C{A}^{p-1}{B}_{v}& C{A}^{p-2}{B}_{v}& C{A}^{p-3}{B}_{v}& \cdots & {D}_{v}\end{array}\right]\in {\Re }^{p{n}_{y}×\left(p+1\right){n}_{v}}.\end{array}$`
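The four prediction matrices can be built directly from these definitions. The following Python/NumPy sketch (illustrative only, with a hypothetical 2-state, 1-input, 1-output model and one measured disturbance, in engineering units with no scaling) constructs them and verifies the stacked prediction equation against a direct simulation of the state-space model:

```python
import numpy as np

# Hypothetical model matrices
A  = np.array([[0.9, 0.1], [0.0, 0.8]])
Bu = np.array([[0.0], [1.0]])
Bv = np.array([[0.5], [0.0]])
C  = np.array([[1.0, 0.0]])
Dv = np.array([[0.2]])
nx, nu, ny, nv, p = 2, 1, 1, 1, 3

# S_x: free response of y(1..p) to x(0)
Sx  = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, p + 1)])
# S_u1: cumulative effect of the held input u(-1)
Su1 = np.vstack([sum(C @ np.linalg.matrix_power(A, h) @ Bu for h in range(i))
                 for i in range(1, p + 1)])
# S_u: block lower-triangular map from increments du(0..p-1) to y(1..p)
Su = np.zeros((p * ny, p * nu))
for i in range(1, p + 1):
    for j in range(i):
        Su[(i-1)*ny:i*ny, j*nu:(j+1)*nu] = \
            sum(C @ np.linalg.matrix_power(A, h) @ Bu for h in range(i - j))
# H_v: map from v(0..p) to y(1..p), with D_v feedthrough on v(i)
Hv = np.zeros((p * ny, (p + 1) * nv))
for i in range(1, p + 1):
    Hv[(i-1)*ny:i*ny, i*nv:(i+1)*nv] = Dv
    for j in range(i):
        Hv[(i-1)*ny:i*ny, j*nv:(j+1)*nv] = \
            C @ np.linalg.matrix_power(A, i - 1 - j) @ Bv

# Verify against simulation with random x(0), u(-1), du, v
rng = np.random.default_rng(0)
x0, um1 = rng.standard_normal(nx), rng.standard_normal(nu)
du = rng.standard_normal((p, nu))
v  = rng.standard_normal((p + 1, nv))
x, u, ysim = x0.copy(), um1.copy(), []
for i in range(p):
    u = u + du[i]                        # u(i) = u(i-1) + du(i)
    x = A @ x + Bu @ u + Bv @ v[i]       # x(i+1)
    ysim.append(C @ x + Dv @ v[i + 1])   # y(i+1)
ysim = np.concatenate(ysim)
ypred = Sx @ x0 + Su1 @ um1 + Su @ du.ravel() + Hv @ v.ravel()
print(np.allclose(ysim, ypred))
```

The simulation and the matrix form agree exactly, which is a convenient way to validate hand-built prediction matrices.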

#### Optimization Variables

Let m be the number of free control moves, and let z= [z0; ...; zm–1]. Then,

`$\left[\begin{array}{c}\Delta u\left(0\right)\\ \cdots \\ \Delta u\left(p-1\right)\end{array}\right]={J}_{M}\left[\begin{array}{c}{z}_{0}\\ \cdots \\ {z}_{m-1}\end{array}\right]$`

where JM depends on the choice of blocking moves. Together with the slack variable ɛ, vectors z0, ..., zm–1 constitute the free optimization variables of the optimization problem. In the case of systems with a single manipulated variable, z0, ..., zm–1 are scalars.

Consider the blocking moves depicted in the following graph.

Blocking Moves: Inputs and Input Increments for moves = [2 3 2]

This graph corresponds to the choice `moves=[2 3 2]`, or equivalently, u(0)=u(1), u(2)=u(3)=u(4), u(5)=u(6), Δu(0)=z0, Δu(2)=z1, Δu(5)=z2, Δu(1)=Δu(3)=Δu(4)=Δu(6)=0.

Then, the corresponding matrix JM is

`${J}_{M}=\left[\begin{array}{ccc}I& 0& 0\\ 0& 0& 0\\ 0& I& 0\\ 0& 0& 0\\ 0& 0& 0\\ 0& 0& I\\ 0& 0& 0\end{array}\right]$`
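Constructing JM for this blocking pattern is mechanical: each block of intervals contributes one identity block in the row where its free move acts, and zeros elsewhere. A Python/NumPy sketch (illustrative, single MV so the identity blocks are 1-by-1):

```python
import numpy as np

# Blocking pattern from the example: moves = [2 3 2], nu = 1
moves = [2, 3, 2]
nu = 1
p = sum(moves)                       # prediction horizon = 7
m = len(moves)                       # number of free moves = 3

JM = np.zeros((p * nu, m * nu))
row = 0
for j, blk in enumerate(moves):
    # first interval of each block carries the free move z_j; rest stay zero
    JM[row*nu:(row+1)*nu, j*nu:(j+1)*nu] = np.eye(nu)
    row += blk

z = np.array([1.0, -0.5, 0.25])      # free moves z0, z1, z2
du = JM @ z                          # increments du(0..p-1)
print(du)
```

The nonzero entries land at rows 0, 2, and 5, reproducing Δu(0)=z0, Δu(2)=z1, Δu(5)=z2 with all other increments zero, exactly as in the matrix above.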

#### Cost Function

Standard Form.  The function to be optimized is

`$\begin{array}{l}J\left(z\right)={\left(\left[\begin{array}{c}u\left(0\right)\\ \cdots \\ u\left(p-1\right)\end{array}\right]-\left[\begin{array}{c}{u}_{target}\left(0\right)\\ \cdots \\ {u}_{target}\left(p-1\right)\end{array}\right]\right)}^{T}{W}_{u}^{2}\left(\left[\begin{array}{c}u\left(0\right)\\ \cdots \\ u\left(p-1\right)\end{array}\right]-\left[\begin{array}{c}{u}_{target}\left(0\right)\\ \cdots \\ {u}_{target}\left(p-1\right)\end{array}\right]\right)\\ +{\left[\begin{array}{c}\Delta u\left(0\right)\\ \cdots \\ \Delta u\left(p-1\right)\end{array}\right]}^{T}{W}_{\Delta u}^{2}\left[\begin{array}{c}\Delta u\left(0\right)\\ \cdots \\ \Delta u\left(p-1\right)\end{array}\right]+{\left(\left[\begin{array}{c}y\left(1\right)\\ \cdots \\ y\left(p\right)\end{array}\right]-\left[\begin{array}{c}r\left(1\right)\\ \cdots \\ r\left(p\right)\end{array}\right]\right)}^{T}{W}_{y}^{2}\left(\left[\begin{array}{c}y\left(1\right)\\ \cdots \\ y\left(p\right)\end{array}\right]-\left[\begin{array}{c}r\left(1\right)\\ \cdots \\ r\left(p\right)\end{array}\right]\right)+{\rho }_{\epsilon }{\epsilon }^{2}\end{array}$`

where

 $\begin{array}{c}{W}_{u}={L}_{Su}^{-1}\text{diag}\left({w}_{0,1}^{u},{w}_{0,2}^{u},...,{w}_{0,{n}_{u}}^{u},...,{w}_{p-1,1}^{u},{w}_{p-1,2}^{u},...,{w}_{p-1,{n}_{u}}^{u}\right)\\ {W}_{\Delta u}={L}_{Su}^{-1}\text{diag}\left({w}_{0,1}^{\Delta u},{w}_{0,2}^{\Delta u},...,{w}_{0,{n}_{u}}^{\Delta u},...,{w}_{p-1,1}^{\Delta u},{w}_{p-1,2}^{\Delta u},...,{w}_{p-1,{n}_{u}}^{\Delta u}\right)\\ {W}_{y}={L}_{Sy}^{-1}\text{diag}\left({w}_{1,1}^{y},{w}_{1,2}^{y},...,{w}_{1,{n}_{y}}^{y},...,{w}_{p,1}^{y},{w}_{p,2}^{y},...,{w}_{p,{n}_{y}}^{y}\right)\end{array}$ (1)

and LSy and LSu are the diagonal matrices of output and MV scale factors, respectively, in engineering units.

Finally, after substituting u(k), Δu(k), y(k), J(z) can be rewritten as

 $J\left(z\right)={z}^{T}{K}_{\Delta u}z+2\left({\left[\begin{array}{c}r\left(1\right)\\ \cdots \\ r\left(p\right)\end{array}\right]}^{T}{K}_{r}+{\left[\begin{array}{c}v\left(0\right)\\ \cdots \\ v\left(p\right)\end{array}\right]}^{T}{K}_{v}+u{\left(-1\right)}^{T}{K}_{u}+{\left[\begin{array}{c}{u}_{target}\left(0\right)\\ \cdots \\ {u}_{target}\left(p-1\right)\end{array}\right]}^{T}{K}_{ut}+x{\left(0\right)}^{T}{K}_{x}\right)z+{\rho }_{\epsilon }{\epsilon }_{k}^{2}+\text{constant}$ (2)

where cy and cu collect the terms that do not depend on the decision variables:

`$\begin{array}{l}{c}_{y}={S}_{x}x\left(0\right)+{S}_{u1}u\left(-1\right)+{H}_{v}\left[\begin{array}{c}v\left(0\right)\\ \cdots \\ v\left(p\right)\end{array}\right]-\left[\begin{array}{c}r\left(1\right)\\ \cdots \\ r\left(p\right)\end{array}\right]\\ {c}_{u}=\left[\begin{array}{c}{I}_{1}\\ \cdots \\ {I}_{p}\end{array}\right]u\left(-1\right)-\left[\begin{array}{c}{u}_{target}\left(0\right)\\ \cdots \\ {u}_{target}\left(p-1\right)\end{array}\right]\end{array}$`

Here, I1 = … = Ip are identity matrices of size nu.

Note

To keep the QP problem strictly convex, if the condition number of the Hessian matrix KΔu is larger than 10^12, the quantity `10*sqrt(eps)` is added to each diagonal term. This correction can be needed only when all input rates are unpenalized (WΔu = 0) (see the `Weights` property of the `mpc` object).
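This safeguard can be sketched as follows (Python/NumPy illustration; the Hessian is a hypothetical stand-in, and `np.finfo(float).eps` plays the role of MATLAB's `eps`):

```python
import numpy as np

# Hypothetical, nearly singular Hessian (one unpenalized direction)
H = np.diag([1.0, 1e-14])

# If the condition number exceeds 1e12, regularize the diagonal
if np.linalg.cond(H) > 1e12:
    H = H + 10 * np.sqrt(np.finfo(float).eps) * np.eye(H.shape[0])

print(np.linalg.cond(H) < 1e12)
```

The added diagonal term is tiny relative to well-scaled problems, so it restores strict convexity without noticeably changing the solution.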

Alternative Cost Function.  If you are using the alternative cost function shown in Alternative Cost Function, then Equation 1 is replaced by the following:

 $\begin{array}{l}{W}_{u}={L}_{Su}^{-1}\text{blkdiag}\left({R}_{u},...,{R}_{u}\right)\\ {W}_{\Delta u}={L}_{Su}^{-1}\text{blkdiag}\left({R}_{\Delta u},...,{R}_{\Delta u}\right)\\ {W}_{y}={L}_{Sy}^{-1}\text{blkdiag}\left(Q,...,Q\right)\end{array}$ (3)

In this case, the block-diagonal matrices repeat p times, that is, once for each step in the prediction horizon.

You also have the option to use a combination of the standard and alternative forms. For more information, see the `Weights` property of the `mpc` object.

#### Constraints

Next, consider the limits on inputs, input increments, and outputs along with the constraint ɛ≥ 0.

`$\left[\begin{array}{c}{y}_{\mathrm{min}}\left(1\right)-\epsilon {V}_{\mathrm{min}}^{y}\left(1\right)\\ \cdots \\ {y}_{\mathrm{min}}\left(p\right)-\epsilon {V}_{\mathrm{min}}^{y}\left(p\right)\\ {u}_{\mathrm{min}}\left(0\right)-\epsilon {V}_{\mathrm{min}}^{u}\left(0\right)\\ \cdots \\ {u}_{\mathrm{min}}\left(p-1\right)-\epsilon {V}_{\mathrm{min}}^{u}\left(p-1\right)\\ \Delta {u}_{\mathrm{min}}\left(0\right)-\epsilon {V}_{\mathrm{min}}^{\Delta u}\left(0\right)\\ \cdots \\ \Delta {u}_{\mathrm{min}}\left(p-1\right)-\epsilon {V}_{\mathrm{min}}^{\Delta u}\left(p-1\right)\end{array}\right]\le \left[\begin{array}{c}y\left(1\right)\\ \cdots \\ y\left(p\right)\\ u\left(0\right)\\ \cdots \\ u\left(p-1\right)\\ \Delta u\left(0\right)\\ \cdots \\ \Delta u\left(p-1\right)\end{array}\right]\le \left[\begin{array}{c}{y}_{\mathrm{max}}\left(1\right)+\epsilon {V}_{\mathrm{max}}^{y}\left(1\right)\\ \cdots \\ {y}_{\mathrm{max}}\left(p\right)+\epsilon {V}_{\mathrm{max}}^{y}\left(p\right)\\ {u}_{\mathrm{max}}\left(0\right)+\epsilon {V}_{\mathrm{max}}^{u}\left(0\right)\\ \cdots \\ {u}_{\mathrm{max}}\left(p-1\right)+\epsilon {V}_{\mathrm{max}}^{u}\left(p-1\right)\\ \Delta {u}_{\mathrm{max}}\left(0\right)+\epsilon {V}_{\mathrm{max}}^{\Delta u}\left(0\right)\\ \cdots \\ \Delta {u}_{\mathrm{max}}\left(p-1\right)+\epsilon {V}_{\mathrm{max}}^{\Delta u}\left(p-1\right)\end{array}\right]$`

Note

To reduce computational effort, the controller automatically eliminates extraneous constraints, such as infinite bounds. Thus, the constraint set used in real time may be much smaller than that suggested in this section.

As with the cost function, you can substitute u(k), Δu(k), and y(k), and obtain

 ${M}_{z}z+{M}_{\epsilon }\epsilon \le {M}_{\mathrm{lim}}+{M}_{v}\left[\begin{array}{c}v\left(0\right)\\ \cdots \\ v\left(p\right)\end{array}\right]+{M}_{u}u\left(-1\right)+{M}_{x}x\left(0\right)$ (4)

In this case, matrices Mz, Mɛ, Mlim, Mv, Mu, and Mx are obtained from the upper and lower bounds and ECR values.

### Unconstrained Model Predictive Control

The optimal solution is computed analytically as

`$z*=-{K}_{\Delta u}^{-1}{\left({\left[\begin{array}{c}r\left(1\right)\\ \cdots \\ r\left(p\right)\end{array}\right]}^{T}{K}_{r}+{\left[\begin{array}{c}v\left(0\right)\\ \cdots \\ v\left(p\right)\end{array}\right]}^{T}{K}_{v}+u{\left(-1\right)}^{T}{K}_{u}+{\left[\begin{array}{c}{u}_{target}\left(0\right)\\ \cdots \\ {u}_{target}\left(p-1\right)\end{array}\right]}^{T}{K}_{ut}+x{\left(0\right)}^{T}{K}_{x}\right)}^{T}$`

and the model predictive controller sets Δu(k)=z*0, u(k)=u(k–1)+Δu(k).
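The analytic solution is just the stationary point of a strictly convex quadratic. A Python/NumPy sketch (illustrative; K and f are hypothetical stand-ins for KΔu and the collected linear term):

```python
import numpy as np

# For J(z) = z' K z + 2 f' z + const with K positive definite,
# the minimizer is z* = -K^{-1} f.
K = np.array([[2.0, 0.5], [0.5, 1.0]])   # hypothetical Hessian (pos. def.)
f = np.array([1.0, -1.0])                # hypothetical linear term

z_star = -np.linalg.solve(K, f)          # avoid forming the explicit inverse

# Verify: the gradient 2(K z + f) vanishes at z*
grad = 2 * (K @ z_star + f)
print(np.allclose(grad, 0))
```

In the controller, f collects the reference, measured disturbance, previous input, target, and state contributions (the Kr, Kv, Ku, Kut, and Kx terms), and only the first increment of z* is applied before the optimization repeats at the next interval.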