
A short explanation / tutorial of the Kalman filter

In my list of methods to explain I have now come to the Kalman filter. This is a short summary of the tutorial by Welch and Bishop, which can be found here. I have been reading up on pose recognition, and a lot of people seem to be using the Kalman filter to track parts moving across the screen, so I needed to learn a tiny bit about it. The basic idea is that the state $$x_t$$ depends linearly on its previous value $$x_{t-1}$$, plus some control input from the previous step, $$u_{t-1}$$, and some process noise from the previous step, $$w_{t-1}$$. This can be written as:

$$x_t = A x_{t-1} + B u_{t-1} + w_{t-1}$$

where the matrix $$A$$ relates the previous state to the current state and $$B$$ relates the control input to the state. We also have measurements of the $$x$$ variable, which we call $$z$$. $$z$$ is a linear function of $$x$$ plus some measurement noise, and we write it as

$$z_{t} = H x_{t} + v_{t}$$

The two noise terms, the process noise and the measurement noise, are assumed to be normally distributed:

$$w\sim N(0,Q)$$
$$v\sim N(0,R)$$
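
To make this concrete, here is an example model of my own (not from the Welch and Bishop tutorial): tracking a point moving across the screen in one dimension, with state $$x_t = [p_t, \dot{p}_t]^T$$ (position and velocity), an acceleration command as the control input $$u_{t-1}$$, and a sensor that only measures position. Over a time step $$\Delta t$$ the matrices could look like

$$A = \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} \Delta t^2 / 2 \\ \Delta t \end{pmatrix}, \quad H = \begin{pmatrix} 1 & 0 \end{pmatrix}$$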

Having defined our variables and the assumptions we need, we can look at how our measurement $$z$$ can help our estimate of $$x$$.

We start out by defining the a priori and a posteriori state errors. If we let $$\hat{x}_t^-$$ be the a priori estimate (before the measurement) and $$\hat{x}_t$$ be the a posteriori estimate (after the measurement), we can write the errors in the estimates as follows

$$e^{-}_{t} = x_{t} - \hat{x}_t^-$$
$$e_{t} = x_{t} - \hat{x}_t$$

The estimate error covariances are now simply

$$P_{t}^{-} = E[e_{t}^{-}e_{t}^{-T}]$$
$$P_{t} = E[e_{t}e_{t}^{T}]$$

We can let our a posteriori estimate be a linear combination of the a priori estimate and the new measurement. By doing that we have incorporated the extra knowledge gained by having access to $$z$$. Our estimate or update equation is now

$$\hat{x}_t = \hat{x}_t^- + K_t (z_{t} - H\hat{x}_t^-)$$

The subtraction term is apparently called the innovation or residual, and it reflects the discrepancy between the predicted measurement $$H\hat{x}_t^-$$ and the actual measurement $$z_t$$. $$K_t$$ here is the gain or blending factor, which decides how much weight the residual gets. The question is: how do we choose $$K_t$$? The idea is to choose the $$K_t$$ that minimizes the a posteriori error covariance, since that gives us the most reliable estimate. To do this we plug the update equation into the covariance definition, take the expectations, differentiate the trace with respect to $$K_t$$, set the derivative to zero, and solve. Doing this we get an expression for $$K_t$$ that looks like this

$$K_t = P_{t}^{-} H^{T} ( H P_{t}^{-} H^{T} + R )^{-1}$$
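
Sketching that minimization a bit more explicitly (this is the standard derivation, filled in by me rather than spelled out in the tutorial summary): substituting the update equation into the definition of $$P_t$$ and using $$E[e_t^- e_t^{-T}] = P_t^-$$ and $$E[v_t v_t^T] = R$$ gives

$$P_t = (I - K_t H) P_t^- (I - K_t H)^T + K_t R K_t^T$$

Differentiating the trace with respect to $$K_t$$ and setting it to zero,

$$\frac{\partial\,\mathrm{tr}(P_t)}{\partial K_t} = -2 P_t^- H^T + 2 K_t (H P_t^- H^T + R) = 0$$

and solving for $$K_t$$ recovers the gain above. Plugging the optimal gain back into $$P_t$$ also yields the simpler covariance update $$P_t = (I - K_t H) P_t^-$$ used in the algorithm below.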

If we take the limit as $$R$$, the measurement noise covariance, goes to zero, we get $$K_t \to H^{-1}$$. This means the gain weighs the residual more heavily, since our measurements are practically noise free. So $$R$$ says something about our belief about how good our measurements are. If instead the a priori error covariance $$P^{-}_{t}$$ goes to zero, then $$K_t$$ also moves towards zero, meaning our a priori estimate is treated as practically perfect and the residual is weighted less heavily.
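
A scalar example of my own makes these limits easy to see: with $$H = 1$$ the gain reduces to

$$K_t = \frac{P_t^-}{P_t^- + R}$$

so with, say, $$P_t^- = 4$$ and $$R = 1$$ we get $$K_t = 0.8$$, and the update moves 80% of the way towards the measurement. As $$R \to 0$$, $$K_t \to 1$$ (trust the measurement); as $$P_t^- \to 0$$, $$K_t \to 0$$ (trust the prediction).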

That being said, we can move on to the structure of the Kalman algorithm.

A priori estimates (the zero-mean noise term drops out of the prediction)
$$\hat{x}_t^- = A \hat{x}_{t-1} + B u_{t-1}$$
$$P^{-}_{t} = A P_{t-1} A^{T} + Q$$

A posteriori estimates
$$K_t = P_{t}^{-} H^{T} ( H P_{t}^{-} H^{T} + R )^{-1}$$
$$\hat{x}_t = \hat{x}_t^- + K_t (z_{t} - H\hat{x}_t^-)$$
$$P_{t} = (I - K_{t} H) P^{-}_{t}$$

Here is a simple implementation of a Kalman filter: kalmansim
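
In case that link goes stale, here is a minimal sketch of my own in Python, using the constant-velocity model from the example above (the matrices and noise levels are made-up illustration values, not anything from the tutorial):

```python
import numpy as np

def kalman_step(x_hat, P, u, z, A, B, H, Q, R):
    """One predict/update cycle of the Kalman filter."""
    # A priori (time update): predict state and error covariance
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q

    # A posteriori (measurement update): gain, state, covariance
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)    # blend prediction and measurement
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred
    return x_new, P_new

# Constant-velocity model: state = [position, velocity]
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
H = np.array([[1.0, 0.0]])                   # we only measure position
Q = 1e-3 * np.eye(2)                         # process noise covariance
R = np.array([[0.25]])                       # measurement noise covariance

rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0])                # true initial state
x_hat = np.zeros(2)                          # initial estimate
P = np.eye(2)                                # initial error covariance
u = np.array([0.0])                          # no control input in this demo

for t in range(20):
    # Simulate the true system and a noisy position measurement
    x_true = A @ x_true + B @ u + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    x_hat, P = kalman_step(x_hat, P, u, z, A, B, H, Q, R)
    print(f"t={t:2d}  true pos={x_true[0]:6.2f}  estimate={x_hat[0]:6.2f}")
```

Running it, the estimate should settle close to the true position within a handful of steps, with $$P$$ shrinking as the filter gains confidence.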

So to summarize the Kalman filter: the basic idea is that the current state depends linearly on the previous state plus some control input, and that we have a measurement which is a noisy linear function of that state. The new state estimate is then the a priori estimate corrected by a gain-weighted residual between the measurement and the predicted measurement. Those are basically all the assumptions needed to derive the equations above, it seems. Thanks, Kalman.
