## Day 247: The Kalman filter – Part 2

The Kalman filter converges on an optimal estimate of a hidden state using noisy measurements and a model that we create.

Okay, I lied: I think we can do a better job explaining the Kalman filter, and more importantly, I have a fun little demo to share with everyone. It’s not mine, but I like it a lot, and it will give you a feel for what the Kalman filter does. So let’s get started!

At the most basic level we have a plant model. That is just a fancy way of describing the behavior we are interested in. From yesterday’s example: if we observed the behavior of our light switch directly, the equation that explained that behavior would be our plant model. Now you may be thinking, why not just observe the light switch directly? In this case we can, but in a lot of other situations we cannot! For example, if we want to build a prosthetic with a hand that opens and closes based on brain activity, we can’t tell what the brain is thinking or doing directly; we need to measure its activity and predict the intended movement from those observations.
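To make the plant-model idea concrete, here is a minimal sketch of the setup: a hidden state that evolves according to a simple (assumed) model, and noisy measurements of it, which are all we actually get to record. The random-walk dynamics and the noise levels here are my own toy assumptions, not anything from the demo or yesterday’s post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy plant model (assumed): the hidden state x drifts as a random walk,
# and we only ever see a noisy measurement y of it.
n_steps = 50
process_noise = 0.1   # assumed process-noise standard deviation
measure_noise = 0.5   # assumed measurement-noise standard deviation

x = np.zeros(n_steps)   # hidden state (what we want to estimate)
y = np.zeros(n_steps)   # noisy measurements (what we actually record)
for k in range(1, n_steps):
    x[k] = x[k - 1] + process_noise * rng.standard_normal()   # plant model
    y[k] = x[k] + measure_noise * rng.standard_normal()       # sensor model

# The measurements scatter around the hidden state; their spread is
# roughly the measurement-noise level we assumed.
print(np.std(y - x))
```

The Kalman filter’s job is exactly to recover `x` given only `y` and the model.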

Going back to yesterday’s example, the prosthetic hand being opened or closed would be the hidden state, just as the light switch being on or off was before. We wouldn’t be able to record the prosthetic opening or closing directly, though; if we knew that, we would be done! Instead, we tell the person to imagine, or try as best they can, to make a hand open/close movement, and we record their brain activity.

A Kalman filter works when we are trying to determine a hidden state; in fact, that is basically the whole purpose of a Kalman filter. So, a quick recap before we move forward:

1. We take a measurement that carries information about our hidden state, i.e., ambient light or EEG from the two examples we’ve covered.
2. We build our model. This is actually fairly straightforward, even though we haven’t touched on it yet; we will cover it.
3. We estimate our hidden states and our unknown variables using something called expectation maximization. We haven’t covered this yet either, but we will.
4. Lastly, we use our Kalman filter to predict the state based on our measurements.
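The predict/update loop in step 4 can be sketched in a few lines. This is a minimal 1D Kalman filter, not the demo’s code: the state is a single number, the model says the state stays put between steps, and the noise variances `q` and `r` are values I picked for illustration (in practice, these are among the unknowns estimated by expectation maximization).

```python
import numpy as np

def kalman_1d(measurements, q=0.01, r=0.25):
    """Minimal 1D Kalman filter sketch. q and r are assumed process- and
    measurement-noise variances; the model assumes a constant hidden state."""
    x_hat, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: the model says the state stays put, but uncertainty grows.
        p = p + q
        # Update: blend the prediction with the new measurement.
        k = p / (p + r)          # Kalman gain: how much to trust the measurement
        x_hat = x_hat + k * (z - x_hat)
        p = (1 - k) * p
        estimates.append(x_hat)
    return estimates

# Noisy readings of a true (hidden) state of 1.0:
rng = np.random.default_rng(1)
zs = 1.0 + 0.5 * rng.standard_normal(200)
est = kalman_1d(zs)
print(est[-1])   # the estimate settles near the true value of 1.0
```

Notice the filter never sees the true state, only `zs`, yet the estimate hovers near 1.0 while individual measurements scatter widely.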

You may be wondering what this looks like. Well, my lovely readers, I have come across a very good example that shows off the power of the Kalman filter. Let me show you a screenshot of what I mean, then I’ll link to it.

So what are we looking at? The green line is my estimated cursor movement; you’ll see what I mean in just a moment. The dots around my cursor are the “noisy” measurements that are fed to the Kalman filter. Estimating cursor position and velocity isn’t too difficult, and this is just a toy example to show you how it all works, but the algorithm has no idea where my cursor actually is; it only has those measured dots. Once again, the green line isn’t where my cursor actually is, it’s where the Kalman filter thinks it is.
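For cursor tracking like this, a common plant model is the constant-velocity model: the state holds both position and velocity, and each time step the position advances by velocity times the step. Here is a sketch for one axis; the time step and matrix values are my own assumptions, not the demo’s actual parameters.

```python
import numpy as np

# Toy constant-velocity plant model for one cursor axis (values assumed).
dt = 0.1                           # time step, in seconds
A = np.array([[1.0, dt],           # position += velocity * dt
              [0.0, 1.0]])         # velocity carries over unchanged
H = np.array([[1.0, 0.0]])         # we measure position only, never velocity

x = np.array([0.0, 2.0])           # start at position 0, moving at 2 units/s
for _ in range(10):
    x = A @ x                      # step the plant model forward

print(round(x[0], 6))   # after 10 steps (1 second): position is 2.0
```

In the demo, a matrix like `A` is what the sliders below the plot are tweaking; the filter runs this same model in its predict step and corrects it with the noisy dots.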

Don’t believe me? Try it out yourself. Best of all, you can adjust the equations for the model below the demo. Play around with it and see what each one does! Those equations are the ones we estimate using expectation maximization, because we don’t typically know their values beforehand (we even estimate the noise in our measurements, which you can also adjust using the sliders). We can get into that some other time, but notice that if you change certain parts of the equations even slightly, the model becomes worse and worse. This is why we need the best guess possible: the worse our estimates, the worse our model becomes.

Okay, hopefully that does a better job of introducing the Kalman filter than my last post, or maybe the combination of the two will do the trick. In any case, I think that covers the intro to the Kalman filter, so next we can get into how we estimate our unknowns!
