
Kalman filters


A Kalman filter aims to estimate a set of variables from a set of measurements. The estimated variable evolves according to a linear equation, and the measurement is related to the estimated variable by another linear equation. Kalman filters have been used in satellite guidance systems and in weapons such as the Peacekeeper missile. They are relatively easy to understand and implement.


To simplify the explanation we describe a one-dimensional Kalman filter. We assume no special mathematical background.

We want to estimate a random variable x(t) that satisfies a linear dynamic equation

x(t(k+1)) = F·x(tk) + u(k)

Linear: F is a number we know.

Dynamic: x depends on the time t.

u is white noise, which means that it is not correlated with any other random variable. White noise is common in hardware: a well-known example is the noise voltage produced in a resistor by thermal agitation. When there are many uncorrelated inputs, their sum is white noise.

The variance of u is Q. Variance is usually computed with this formula:

Q = (Σ k=0..n (u(k) − m)²) / (n+1) where the mean m is 0, which gives Q = (Σ k=0..n u(k)²) / (n+1).
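As a small sketch (not from the original article), the sample variance of a zero-mean noise sequence can be computed directly from that formula; the sequence u below is made up for illustration:

```python
# Sample variance Q of a noise sequence u(0)..u(n): the average
# squared deviation of the samples from the mean m.
def sample_variance(u, m=0.0):
    return sum((uk - m) ** 2 for uk in u) / len(u)

# With mean m = 0 this reduces to the average of the squared samples.
u = [0.5, -1.0, 0.25, -0.75]          # hypothetical noise samples
Q = sample_variance(u)                # (0.25 + 1.0 + 0.0625 + 0.5625) / 4
```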

Kalman uses this formula:

Q = E[u²] where E denotes the expected value.

If the probability distribution of a random variable X is:

Value of X      x1   x2   ...   xk
Probability     p1   p2   ...   pk

then its expected value is E[X] = Σ i=1..k pi·xi.
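The expected-value formula can be illustrated with a short sketch (the fair-die example below is our own, not from the article):

```python
# Expected value of a discrete random variable X:
# E[X] = sum over i of p_i * x_i.
def expected_value(values, probs):
    assert abs(sum(probs) - 1.0) < 1e-9   # probabilities must sum to 1
    return sum(p * x for p, x in zip(probs, values))

# A fair six-sided die: each face has probability 1/6, so E[X] = 3.5.
die_mean = expected_value([1, 2, 3, 4, 5, 6], [1 / 6] * 6)
```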

A Kalman filter needs an initial estimate xe of x(t0) to get started.

The variance of the error in this estimate is P = E[(x(t0) − xe)²].

According to the linear dynamic equation, x(t1) = F·x(t0) + u(0).

We can estimate x(t1) by replacing x(t0) and u(0) with their estimates: newxe = F·xe + ue.

The mean value of u is 0, so our best estimate of u(0) is ue = 0.

Our estimate of x(t1) is therefore newxe = F·xe.

The variance of our estimate is:

newP = E[(x(t1) − newxe)²] = E[(F·x(t0) + u − F·xe)²] =

F²·E[(x(t0) − xe)²] + E[u²] + 2F·E[(x(t0) − xe)·u]

Since u is not correlated with x(t0) or xe, E[(x(t0) − xe)·u] = 0 and we can write:

newP = P·F² + Q
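The prediction step just derived can be sketched as a small function, under the assumption of a scalar state (the numeric values below are made up for illustration):

```python
# Time-update (prediction) step of the scalar filter:
# the estimate propagates as F*xe and its error variance as P*F^2 + Q.
def time_update(xe, P, F, Q):
    new_xe = F * xe           # newxe = F * xe  (ue = 0)
    new_P = P * F ** 2 + Q    # newP  = P * F^2 + Q
    return new_xe, new_P

xe1, P1 = time_update(xe=2.0, P=1.0, F=0.5, Q=0.1)
```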

We now make a measurement y of x:

y(1) = M·x(t1) + w(1)

M is a number we know.

w is white noise whose variance is R and whose mean value is 0.

We can estimate y(1) as ye = M·newxe.

We can improve our estimate of x(t1) with the formula:

newerxe = newxe + K·(y(1) − M·newxe) = newxe + K·(y(1) − ye)

where K is the Kalman gain.

To get the optimal K value we compute the variance of the resulting error:

newerP = E[(x(t1) − newerxe)²] = E[(x − newxe − K·(y − M·newxe))²] =

E[(x − newxe − K·(M·x + w − M·newxe))²] = E[((1 − K·M)·(x − newxe) + K·w)²] =

newP·(1 − K·M)² + R·K² = newP·(1 − 2·K·M + K²·M²) + R·K²

newerP is minimal when the derivative of newerP with respect to K equals 0. That derivative is 2·K·(newP·M² + R) − 2·M·newP, so setting it to 0 gives

K = M·newP / (newP·M² + R)

Because we found that newP = P·F² + Q we can write

K = M·(P·F² + Q) / ((P·F² + Q)·M² + R), which only contains known values.
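The measurement-update step can also be sketched as a function, again for the scalar case with the same names as in the text (the numeric values are made up, continuing the prediction example with newP = 0.35):

```python
# Measurement-update step: compute the optimal gain
# K = M*newP / (newP*M^2 + R), correct the predicted estimate with the
# measurement y, and compute the variance of the corrected estimate.
def measurement_update(new_xe, new_P, y, M, R):
    K = M * new_P / (new_P * M ** 2 + R)             # optimal Kalman gain
    newer_xe = new_xe + K * (y - M * new_xe)         # corrected estimate
    newer_P = new_P * (1 - K * M) ** 2 + R * K ** 2  # its error variance
    return newer_xe, newer_P, K

xe2, P2, K = measurement_update(new_xe=1.0, new_P=0.35, y=1.2, M=1.0, R=0.1)
```

With the optimal gain, the variance formula simplifies to newer_P = newP·R / (newP·M² + R), which is always smaller than newP: the measurement always improves the estimate.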

We iterate the process for t2, t3, ..., each time using newerxe and newerP as the new xe and P. At each step the variance of our estimation error decreases, so our estimate becomes more and more accurate.
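Putting both steps together, the iteration above can be sketched end to end. The scenario below (tracking a constant signal x = 1 with F = 1, M = 1 and made-up noise levels) is our own illustration, not from the article:

```python
import random

# Iterate the scalar Kalman filter over a list of measurements ys,
# alternating the time update and the measurement update.
def kalman_1d(ys, xe, P, F, M, Q, R):
    history = []
    for y in ys:
        xe, P = F * xe, P * F ** 2 + Q            # time update
        K = M * P / (P * M ** 2 + R)              # optimal gain
        xe = xe + K * (y - M * xe)                # corrected estimate
        P = P * (1 - K * M) ** 2 + R * K ** 2     # corrected variance
        history.append((xe, P))
    return history

# Noisy measurements of a constant signal x = 1 (hypothetical values).
random.seed(0)
measurements = [1.0 + random.gauss(0, 0.3) for _ in range(20)]
steps = kalman_1d(measurements, xe=0.0, P=1.0, F=1.0, M=1.0, Q=0.0, R=0.09)
```

Printing the second element of each entry in `steps` shows the error variance P shrinking at every iteration, exactly as the text describes.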

Additional reading

A presentation by Dan Simon, with interesting comments on the limitations of Kalman filters.

The original paper by Kalman. Not bad at all, even without a math background.

The first chapter of the “Stochastic Models, Estimation, and Control” book by Peter Maybeck. Greg Welch says that it “provides a wonderful, very simple and yet revealing introduction to some of the concepts of Kalman filtering.” Well, that depends on the reader’s background.

An interesting document by Peter Joseph. The presentation above is based on this document. So why not just provide a link? Peter Joseph uses a mathematical notation with implicit simplifications. Our experience is that programmers (probably infected by compilers) have much stricter grammars. Hence this document.

Bayes++. Bayesian filtering classes in Open Source (MIT license). They include Kalman filtering and use Boost.

2001-2004 Alexis Grandemange   Last modified