
# fixed.forgettingFactor

Compute forgetting factor required for streaming input data

## Syntax

```
alpha = fixed.forgettingFactor(m)
```

## Description


`alpha = fixed.forgettingFactor(m)` returns the forgetting factor α for an infinite number of rows with the equivalent gain of a matrix `A` with `m` rows.

## Examples


This example shows how to use the `fixed.forgettingFactor` and `fixed.forgettingFactorInverse` functions.

The growth in the QR decomposition can be seen in the magnitude of the first element $R(1,1)$ of the upper-triangular factor $R$, which is equal to the Euclidean norm of the first column of matrix $A$:

$|R(1,1)| = \|A(:,1)\|_2$.

To see this, create matrix $A$ as a column of ones of length $n$ and compute $R$ of the economy-size QR decomposition.

```
n = 1e4;
A = ones(n,1);
```

Then $|R(1,1)| = \|A(:,1)\|_2 = \sqrt{\sum_{i=1}^{n} 1^2} = \sqrt{n}$.

```
R = fixed.qlessQR(A)
```
```
R = 100.0000
```
```
norm(A)
```
```
ans = 100
```
```
sqrt(n)
```
```
ans = 100
```
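The same growth can be checked with plain arithmetic outside MATLAB; a minimal Python sketch (an illustration only, not part of the toolbox):

```python
import math

# Euclidean norm of a column of n ones is sqrt(sum of n squared ones) = sqrt(n)
n = 10**4
col = [1.0] * n
norm = math.sqrt(sum(x * x for x in col))
print(norm)  # 100.0, equal to sqrt(n)
```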

The diagonal elements of the upper-triangular factor $R$ of the QR decomposition may be positive, negative, or zero, but `fixed.qlessQR` and `fixed.qrAB` always return the diagonal elements of $R$ as non-negative.

In a real-time application, such as when data is streaming continuously from a radar array, you can update the QR decomposition with an exponential forgetting factor $\alpha$, where $0 < \alpha < 1$. Use the `fixed.forgettingFactor` function to compute a forgetting factor $\alpha$ that acts as if the matrix were being integrated over $m$ rows to maintain a gain of about $\sqrt{m}$. The relationship between $\alpha$ and $m$ is $\alpha = e^{-1/(2m)}$.

```
m = 16;
alpha = fixed.forgettingFactor(m);
R_alpha = fixed.qlessQR(A,alpha)
```
```
R_alpha = 3.9377
```
```
sqrt(m)
```
```
ans = 4
```

If you have been given a forgetting factor $\alpha$ and want to know the effective number of rows $m$ that you are integrating over, use the `fixed.forgettingFactorInverse` function. The relationship between $m$ and $\alpha$ is $m = \frac{-1}{2\log(\alpha)}$.

```
fixed.forgettingFactorInverse(alpha)
```
```
ans = 16
```
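The two closed-form relationships above, $\alpha = e^{-1/(2m)}$ and $m = \frac{-1}{2\log(\alpha)}$, are exact inverses of each other and can be checked with a few lines of arithmetic; a minimal Python sketch (the function names mirror the MATLAB ones but are hypothetical stand-ins, not the toolbox implementation):

```python
import math

def forgetting_factor(m):
    # alpha = e^(-1/(2m)): forgetting factor whose gain over an
    # infinitely tall matrix is equivalent to integrating over m rows
    return math.exp(-1.0 / (2.0 * m))

def forgetting_factor_inverse(alpha):
    # m = -1 / (2*log(alpha)): effective number of rows for a given alpha
    return -1.0 / (2.0 * math.log(alpha))

alpha = forgetting_factor(16)
print(alpha)                              # approximately 0.9692
print(forgetting_factor_inverse(alpha))   # recovers 16
```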

## Input Arguments


`m` — Number of rows in matrix `A`, specified as a positive integer-valued scalar.

Data Types: `double`

## Output Arguments


`alpha` — Forgetting factor, returned as a scalar in the range $0 < \alpha < 1$.

## Tips

Use `fixed.forgettingFactor` to compute a forgetting factor for functions such as `fixed.qlessQR` and `fixed.qrAB`.

## Algorithms

In real-time applications, such as when data is streaming continuously from a radar array [1], the QR decomposition is often computed continuously as each new row of data arrives. In these systems, the previously computed upper-triangular matrix $R$ is updated and weighted by the forgetting factor $\alpha$, where $0 < \alpha < 1$. This computation treats the matrix $A$ as if it were infinitely tall. The series of transformations is as follows.

$$\begin{aligned}
R_0 &= \operatorname{zeros}(n,n)\\
\begin{bmatrix} R_0 \\ A(1,:) \end{bmatrix} &\to \begin{bmatrix} R_1 \\ 0 \end{bmatrix}\\
\begin{bmatrix} \alpha R_1 \\ A(2,:) \end{bmatrix} &\to \begin{bmatrix} R_2 \\ 0 \end{bmatrix}\\
&\;\vdots\\
\begin{bmatrix} \alpha R_k \\ A(k+1,:) \end{bmatrix} &\to \begin{bmatrix} R_{k+1} \\ 0 \end{bmatrix}
\end{aligned}$$

Without the forgetting factor $\alpha$, the values of $R$ would grow without bound.

With the forgetting factor, the gain in R is

$$g = \sqrt{\frac{1}{2}\int_0^{\infty} \alpha^{x}\,dx} = \sqrt{\frac{-1}{2\log(\alpha)}}.$$

The gain of computing R without a forgetting factor from an m-by-n matrix A is $\sqrt{m}$. Therefore,

$$\begin{aligned}
\sqrt{m} &= \sqrt{\frac{-1}{2\log(\alpha)}}\\
m &= \frac{-1}{2\log(\alpha)}\\
\alpha &= e^{-1/(2m)}.
\end{aligned}$$
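The derivation above can be checked numerically. For a single streaming column of ones, each transformation reduces to the scalar update $R \leftarrow \sqrt{(\alpha R)^2 + 1}$, whose fixed point is $1/\sqrt{1-\alpha^2} \approx \sqrt{-1/(2\log\alpha)} = \sqrt{m}$ for $\alpha$ near 1. A minimal Python sketch of that recursion (an illustration of the formulas, not the MATLAB implementation):

```python
import math

m = 16
alpha = math.exp(-1.0 / (2.0 * m))  # forgetting factor for m effective rows

# Stream rows of A = ones(n,1): each new row updates the 1-by-1 factor R.
R = 0.0
for _ in range(10000):
    R = math.hypot(alpha * R, 1.0)  # [alpha*R; 1] rotated into [R'; 0]

print(R)  # converges to 1/sqrt(1 - alpha^2), approximately sqrt(m) = 4
```

The steady-state value differs slightly from $\sqrt{m}$ because the integral in the gain derivation approximates the discrete sum, which is why the earlier example returned `R_alpha = 3.9377` rather than exactly 4.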

## References

[1] Rader, C.M. "VLSI Systolic Arrays for Adaptive Nulling." IEEE Signal Processing Magazine (July 1996): 29-49.

## Version History

Introduced in R2021b