Day 41: Connecting the Concepts
Maybe we shouldn’t phrase it this way, since there are still quite a few days left of 365DoA, but you made it to the end! No, not THE end, but if you’ve been following along over the past few posts, we’ve introduced several seemingly disparate concepts and said, “don’t worry, they are related,” without telling you how. Well today, like a magician showing you how to pull a rabbit from a hat, let’s connect the dots and explain why we introduced all those concepts!*
Okay, we started several posts back talking about probability density functions, then cumulative distribution functions. We told you those were related (the CDF is just the integral of the pdf) and showed you how to go from one to the other. Next we introduced several different kinds of distributions to give you a feel for them (the gaussian, the Laplace, the exponential, and the uniform). Up to that point, if we’d done our job right, everything we were talking about felt coherently connected.
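If you want to see that pdf-to-CDF relationship in action, here’s a minimal sketch (Python with numpy and scipy; the grid range and spacing are arbitrary choices of ours, not anything from the earlier posts) that numerically integrates a gaussian pdf with a running sum and checks it against the exact CDF:

```python
import numpy as np
from scipy.stats import norm

# Numerically integrate a standard gaussian pdf and compare
# against scipy's built-in CDF. Grid range/spacing are arbitrary.
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
pdf = norm.pdf(x)                 # standard gaussian: mean 0, std 1

cdf_numeric = np.cumsum(pdf) * dx # running sum approximates the integral
cdf_exact = norm.cdf(x)

# The approximation error shrinks as dx shrinks
print(np.max(np.abs(cdf_numeric - cdf_exact)))
```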
Then we took a hard left.
A few posts back we introduced Bayes’ theorem. To make things more confusing, we then introduced the Poisson distribution, and finally, over the past two posts, we looked at two ways to relate the binomial distribution to the gaussian distribution and to the Poisson distribution. Yes, that’s a whole lot of linking to our other posts, but having them gathered here makes them easier to go back and reference as we connect the dots.
Now let’s start with the recent stuff and work our way back. The Poisson approximation ends up being a special case of the binomial distribution, one where the number of trials is large and the probability of success in any one trial is small. That is why we introduced the Poisson distribution, but we also gave another approximation, one using the de Moivre-Laplace theorem (the gaussian approximation).
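To see just how good that approximation is, here’s a minimal sketch (the values of n and p below are made up purely for illustration) comparing the exact binomial probabilities against the Poisson approximation with lambda = n*p:

```python
import numpy as np
from scipy.stats import binom, poisson

# Poisson approximation to the binomial: many trials (n large),
# rare successes (p small), with lambda = n * p held fixed.
n, p = 1000, 0.005            # assumed example values
lam = n * p                   # lambda = 5 expected successes

k = np.arange(0, 16)
exact = binom.pmf(k, n, p)
approx = poisson.pmf(k, lam)

# The two sets of probabilities are very close
print(np.max(np.abs(exact - approx)))
```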
This is related to another concept we talked about several posts back called the central limit theorem. Really the gaussian approximation is also a special case, not of the binomial distribution itself, but of the central limit theorem: the sum of many independent Bernoulli trials (which is exactly what a binomial random variable is) starts to look gaussian. Bernoulli trials are a fancy way of saying each trial has only two outcomes, a success or a failure.
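Here’s the same kind of check for the de Moivre-Laplace approximation, a minimal sketch (again, n and p are made-up example values) comparing the binomial pmf to a gaussian with mean n*p and variance n*p*(1-p):

```python
import numpy as np
from scipy.stats import binom, norm

# de Moivre-Laplace: for large n, the binomial pmf is approximated
# by a gaussian with mean n*p and variance n*p*(1-p).
n, p = 100, 0.5               # assumed example values
mu = n * p
sigma = np.sqrt(n * p * (1 - p))

k = np.arange(30, 71)
exact = binom.pmf(k, n, p)
approx = norm.pdf(k, loc=mu, scale=sigma)

# The gap is small for n this large
print(np.max(np.abs(exact - approx)))
```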
This is also why we introduced Bayes’ theorem (technically it was so nice we did it twice!). Bernoulli trials require independent events, meaning one outcome cannot affect another. Bayes’ theorem is sort of an extension of that idea, but it lets us look at the effect dependent events have on each other.
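As a quick illustration, here’s a minimal sketch of Bayes’ theorem in action (all of the probabilities below are made-up numbers, just to show the mechanics of updating a belief after observing a dependent event):

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# All numbers below are made up purely for illustration.
p_a = 0.01                    # P(A): prior probability of event A
p_b_given_a = 0.9             # P(B|A)
p_b_given_not_a = 0.05        # P(B|not A)

# Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Observing B raises our belief in A from 1% to about 15%
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)            # ~0.154
```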
So yeah, that’s it; it’s all connected in an odd, roundabout sort of way. Granted, it’s a short post, but it’s nice to explicitly show where all this comes from and why. Now, you may notice Bayes’ theorem is somewhat of a dangling connection in all this. Yes, it’s related, but in a roundabout sort of way, not as directly as, say, the central limit theorem. Next up we’re going to look at dependent events and how they impact what our pdf looks like. I’ll give you a hint: if you’ve read the Bayes’ theorem posts, you’ll know just HOW the pdf will change. Or, if you’re one for surprises, just wait until the next post where we cover it!
Until next time, don’t stop learning!
*My dear readers, please remember that I make no claim to the accuracy of this information; some of it might be wrong. I’m learning, which is why I’m writing these posts, and if you’re reading this, then I am assuming you’re trying to learn too. My plea to you is this: if you see something that is not correct, or if you want to expand on something, do it. Let’s learn together!!