Compute H-infinity optimal controller

`[K,CL,gamma] = hinfsyn(P,nmeas,ncont)` computes a stabilizing *H*_{∞}-optimal controller `K` for the plant `P`. The plant has the partitioned form

$$\left[\begin{array}{c}z\\ y\end{array}\right]=\left[\begin{array}{cc}{P}_{11}& {P}_{12}\\ {P}_{21}& {P}_{22}\end{array}\right]\left[\begin{array}{c}w\\ u\end{array}\right],$$

where:

- *w* represents the disturbance inputs.
- *u* represents the control inputs.
- *z* represents the error outputs to be kept small.
- *y* represents the measurement outputs provided to the controller.

`nmeas` and `ncont` are the number of signals in *y* and *u*, respectively. *y* and *u* are the last outputs and inputs of `P`, respectively. `hinfsyn` returns a controller `K` that stabilizes `P` and has the same number of states. The closed-loop system `CL = lft(P,K)` achieves the performance level `gamma`, which is the *H*_{∞} norm of `CL` (see `hinfnorm`).
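As a hedged illustration, a typical call looks like the following sketch. The plant `G` and the weights `W1` and `W3` are invented for this example (they are not part of this page), and `augw` and `makeweight` are used here only as one convenient way to build a partitioned plant:

```matlab
% Illustrative mixed-sensitivity synthesis; G, W1, W3 are example choices.
G  = tf(1,[1 2 1]);             % nominal plant to be controlled
W1 = makeweight(10,1,0.1);      % weight penalizing the tracking error
W3 = makeweight(0.1,10,10);     % weight penalizing the control effort path
P  = augw(G,W1,[],W3);          % partitioned plant with [z;y] = P*[w;u]
[K,CL,gamma] = hinfsyn(P,1,1);  % nmeas = 1 (y), ncont = 1 (u)
```

Here `gamma` is the achieved *H*_{∞} norm of `CL = lft(P,K)`.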

`[K,CL,gamma] = hinfsyn(P,nmeas,ncont,gamTry)` calculates a controller for the target performance level `gamTry`. Specifying `gamTry` can be useful when the optimal controller performance is better than you need for your application. In that case, a less-than-optimal controller can have smaller gains and be better numerically conditioned. If `gamTry` is not achievable, `hinfsyn` returns `[]` for `K` and `CL`, and `Inf` for `gamma`.
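For instance, in this sketch the target level 1.5 is an arbitrary value, and `P`, `nmeas`, and `ncont` are assumed to be already defined:

```matlab
gamTry = 1.5;                                 % arbitrary target level
[K,CL,gamma] = hinfsyn(P,nmeas,ncont,gamTry);
if isempty(K)
    % gamTry was not achievable; gamma is Inf
end
```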

`[K,CL,gamma] = hinfsyn(P,nmeas,ncont,gamRange)` searches the range `gamRange` for the best achievable performance. Specify the range with a vector of the form `[gmin,gmax]`. Limiting the search range can speed up computation by reducing the number of iterations performed by `hinfsyn` to test different performance levels.
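For example, with illustrative bounds (and `P`, `nmeas`, `ncont` assumed defined):

```matlab
% Search only between gmin = 0.5 and gmax = 2 (illustrative bounds).
[K,CL,gamma] = hinfsyn(P,nmeas,ncont,[0.5 2]);
```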

`[K,CL,gamma] = hinfsyn(___,opts)` specifies additional computation options. To create `opts`, use `hinfsynOptions`. Specify `opts` after all other input arguments.
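A minimal sketch, assuming `P`, `nmeas`, and `ncont` are already defined:

```matlab
% Create options with hinfsynOptions and pass them as the last argument.
opts = hinfsynOptions('Display','on');        % report each gamma tested
[K,CL,gamma] = hinfsyn(P,nmeas,ncont,opts);
```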

`hinfsyn` gives you state-feedback gains and observer gains that you can use to express the controller in observer form. The observer form of the controller `K` is:

$$\begin{array}{c}d{x}_{e}=A{x}_{e}+{B}_{1}{w}_{e}+{B}_{2}u+{L}_{x}e\\ u={K}_{u}{x}_{e}+{L}_{u}e\\ {w}_{e}={K}_{w}{x}_{e}.\end{array}$$

Here, *w*_{e} is an estimate of the worst-case perturbation, and the innovation term *e* is given by:

$$e=y-{C}_{2}{x}_{e}-{D}_{21}{w}_{e}-{D}_{22}u.$$

`hinfsyn` returns the state-feedback gains *K*_{u} and *K*_{w} and the observer gains *L*_{x} and *L*_{u} as fields in the `info` output argument.

You can use this form of the controller for gain scheduling in Simulink^{®}. To do so, tabulate the plant matrices and the controller gain matrices as functions of the scheduling variables using the Matrix Interpolation (Simulink) block. Then, use the observer form of the controller to update the controller variables as the scheduling variables change.
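As a sketch of retrieving these gains (the field names `Ku`, `Kw`, `Lx`, `Lu` mirror the gain symbols in the observer-form equations; confirm them against the `info` argument description for your release):

```matlab
% Request the info output and read the gains used in the observer form.
[K,CL,gamma,info] = hinfsyn(P,nmeas,ncont);
Ku = info.Ku;  Kw = info.Kw;    % state-feedback gains
Lx = info.Lx;  Lu = info.Lu;    % observer gains
```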

By default, `hinfsyn`

uses the two-Riccati formulae
([1],[2]) with loop shifting
[3]. You can use
`hinfsynOptions`

to change to an LMI-based method
([4],[5],[6]). You can also
specify a maximum-entropy method. In that method, `hinfsyn`

returns the
*H*_{∞} controller that maximizes an entropy
integral relating to the point `S0`

. For continuous-time systems, this
integral is:

$$\text{Entropy}=\frac{{\gamma}^{2}}{2\pi}{\displaystyle {\int}_{-\infty}^{\infty}\mathrm{ln}\left|\mathrm{det}\left(I-{\gamma}^{-2}{T}_{{y}_{1}{u}_{1}}{(j\omega )}^{\prime}{T}_{{y}_{1}{u}_{1}}(j\omega )\right)\right|\left[\frac{{s}_{0}{}^{2}}{{s}_{0}{}^{2}+{\omega}^{2}}\right]d\omega}$$

where $${T}_{{y}_{1}{u}_{1}}$$ is the closed-loop transfer function `CL`

. A similar
integral is used for discrete-time systems.
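A sketch of selecting this method (the option values `'MAXE'` and `S0 = 10` are illustrative choices, assuming `P`, `nmeas`, and `ncont` are defined):

```matlab
% Select the maximum-entropy method, with the entropy integral
% evaluated at the point S0 = 10.
opts = hinfsynOptions('Method','MAXE','S0',10);
[K,CL,gamma] = hinfsyn(P,nmeas,ncont,opts);
```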

For all methods, the function uses a standard *γ*-iteration technique to
determine the optimal value of the performance level *γ*.
*γ*-iteration is a *bisection algorithm* that starts
with high and low estimates of *γ* and iterates on *γ*
values to approach the optimal *H*_{∞} control
design.

At each iteration, the algorithm tests a *γ* value to
determine whether a solution exists. In the Riccati-based method, the algorithm computes the
smallest performance level for which the stabilizing Riccati solutions *X* =
*X*_{∞}/*γ* and *Y* =
*Y*_{∞}/*γ* exist. For any *γ* greater than that performance level and
in the range `gamRange`, the algorithm evaluates the central controller
formulas (*K* formulas) and checks the closed-loop stability of `CL = lft(P,K)`. This step is equivalent to verifying the conditions:

- `min(eig(X)) ≥ 0`
- `min(eig(Y)) ≥ 0`
- `rho(XY) < 1`, where the spectral radius `rho(XY) = max(abs(eig(XY)))`

A *γ* that meets these conditions *passes*. The
stopping criterion for the bisection algorithm requires the relative difference between the
last *γ* value that failed and the last *γ* value that
passed to be less than 0.01. (You can change this criterion using
`hinfsynOptions`

.) `hinfsyn`

returns the controller
corresponding to the smallest tested *γ* value that passes. For discrete-time
controllers, the algorithm performs additional computations to construct the feedthrough
matrix *D*_{K}.

Use the `Display`

option of `hinfsynOptions`

to make
`hinfsyn`

display values showing which of the conditions are satisfied
for each *γ* value tested.

The algorithm works best when the following conditions are satisfied by the plant:

- *D*_{12} and *D*_{21} have full rank.
- $$\left[\begin{array}{cc}A-j\omega I& {B}_{2}\\ {C}_{1}& {D}_{12}\end{array}\right]$$ has full column rank for all *ω* ∊ *R*.
- $$\left[\begin{array}{cc}A-j\omega I& {B}_{1}\\ {C}_{2}& {D}_{21}\end{array}\right]$$ has full row rank for all *ω* ∊ *R*.

When these rank conditions do not hold, the controller may have undesirable properties. If
*D*_{12} and
*D*_{21} are not full rank, then the
*H*_{∞} controller `K`

might
have large high-frequency gain. If either of the latter two rank conditions does not hold at
some frequency *ω*, the controller might have very lightly damped poles near
that frequency.
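One way to spot-check these rank conditions numerically is sketched below. The frequency grid is arbitrary, and the partitioning assumes a state-space plant `P` with known `nmeas` and `ncont`, with *y* and *u* last as `hinfsyn` requires:

```matlab
% Spot-check the rank conditions at a few sample frequencies.
[A,B,C,D] = ssdata(P);
n   = size(A,1);
B1  = B(:,1:end-ncont);            B2  = B(:,end-ncont+1:end);
C1  = C(1:end-nmeas,:);            C2  = C(end-nmeas+1:end,:);
D12 = D(1:end-nmeas,end-ncont+1:end);
D21 = D(end-nmeas+1:end,1:end-ncont);
for w = [0 1 10 100]               % sample frequencies, rad/s
    fullColRank = rank([A-1i*w*eye(n) B2; C1 D12]) == n + ncont;
    fullRowRank = rank([A-1i*w*eye(n) B1; C2 D21]) == n + nmeas;
end
```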

[1] Glover, K., and J.C. Doyle. "State-space formulae for all
stabilizing controllers that satisfy an H_{∞} norm bound and
relations to risk sensitivity." *Systems & Control Letters*, Vol. 11,
Number 8, 1988, pp. 167–172.

[2] Doyle, J.C., K. Glover, P. Khargonekar, and B. Francis.
"State-space solutions to standard H_{2} and
H_{∞} control problems." *IEEE Transactions on
Automatic Control*, Vol. 34, Number 8, August 1989, pp. 831–847.

[3] Safonov, M.G., D.J.N. Limebeer, and R.Y. Chiang. "Simplifying the
H_{∞} Theory via Loop Shifting, Matrix Pencil and Descriptor
Concepts." *Int. J. Contr.*, Vol. 50, Number 6, 1989, pp. 2467–2488.

[4] Packard, A., K. Zhou, P. Pandey, J. Leonhardson, and G. Balas.
"Optimal, constant I/O similarity scaling for full-information and state-feedback problems."
*Systems & Control Letters*, Vol. 19, Number 4, 1992, pp. 271–280.

[5] Gahinet, P., and P. Apkarian. "A linear matrix inequality approach
to H_{∞}-control." *Int. J. Robust and Nonlinear
Control*, Vol. 4, Number 4, 1994, pp. 421–448.

[6] Iwasaki, T., and R.E. Skelton. "All controllers for the general
H_{∞}-control problem: LMI existence conditions and state space
formulas." *Automatica*, Vol. 30, Number 8, 1994, pp. 1307–1317.