Now that we’ve looked at conditional probabilities, we can talk about other things we can do with random variables. If you’ve been keeping up with us so far, then this shouldn’t be too crazy of an idea; really, all we are going to do today is take a random variable and transform it somehow. Interested? Let’s go!*
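A quick preview of the machinery, as a minimal sketch: if Y = g(X) for a monotone, differentiable g, the standard change-of-variables result gives the pdf of the transformed variable:
$$f_Y(y) = f_X\big(g^{-1}(y)\big)\left|\frac{d}{dy}g^{-1}(y)\right|.$$
For example, if Y = 2X, then f_Y(y) = (1/2) f_X(y/2).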
Up to now we’ve been dealing with single-variable pdfs and their corresponding CDFs. We said that these probabilities relied on the fact that our variable of interest was independent. However, what if we knew some property that impacted our probability? Today we are talking conditional probability, and that is the question we will be answering. It’s going to be a long, long post, so plan accordingly.*
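For reference while you read, the definition the post builds on is the standard one:
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(B) > 0.$$
For example, drawing without replacement from a deck: P(second card is an ace | first card was an ace) = 3/51.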
Maybe we shouldn’t phrase it this way, since there are still quite a few days left of 365DoA, but you made it to the end! No, not THE end, but if you’ve been following along, the past few posts have introduced several seemingly disparate concepts and said, “don’t worry, they are related,” without telling you how. Well today, like a magician showing you how to pull a rabbit from a hat, let’s connect the dots and explain why we introduced all those concepts!*
Over the past couple of days, I’ve been talking about several different types of pdf and the associated C.D.F. Hopefully, we have a clear understanding of each of those concepts; for those of you scratching your head, I would recommend you start here at this other post. Otherwise, let’s (finally) look at a real-life example using the exponential pdf!*
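A minimal sketch of the flavor of example involved; the numbers here are made up for illustration, not taken from the post. The exponential pdf is
$$f(x) = \lambda e^{-\lambda x}, \quad x \ge 0,$$
which gives the survival probability P(X > t) = e^{-\lambda t}. If calls arrive on average every 10 minutes (λ = 0.1 per minute), the chance of waiting more than 20 minutes is e^{-2} ≈ 0.135.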
Well, here we are again… unless you’re new, in which case welcome. If you are just joining us, we are talking p.d.f. No, not the file format, the probability density function version. If you’re new, you may want to start back here(ish). If not, then let’s talk about the strangely similar Laplace distribution.*
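To see the “strangely similar” part at a glance: the exponential pdf is f(x) = λe^{-λx} for x ≥ 0, while the Laplace pdf is
$$f(x) = \frac{1}{2b}\, e^{-|x - \mu|/b},$$
essentially an exponential mirrored to both sides of μ.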
Well, it has been a week, don’t even get me started. But if you’re here, you don’t want to hear me complain about my week; that isn’t why we come together! Today let’s do a bit of a dive into the exponential p.d.f. I hope you’ve brushed up, because this is going to get interesting.*
Day 30 already! Where does the time go? It feels like we just started this whole project, and it probably wouldn’t be a good idea to look at the remaining time to completion, so let’s not and just enjoy the nice round 30. We will get back to our p.d.f. another day, but today is going to be short. That’s what I usually say before typing out 10 pages’ worth of information, so to avoid that, let’s touch on something important, but something I can do briefly. Today we’re talking about confidence intervals.*
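The headline formula, as a quick preview: the usual normal-approximation interval for a mean is
$$\bar{x} \pm z^{*}\frac{\sigma}{\sqrt{n}},$$
with z* ≈ 1.96 for 95% confidence.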
Well, apparently you guys really appreciated my probability density function posts. It’s good to see people interested in something a little less well-known (at least to me). So for those of you just joining us, you’ll want to start at part 1 here. For those of you who are keeping up with the posts, let’s review and then look at specific functions. Namely, let’s start by going back to our Gaussian distribution function and talk about what’s going on with that whole mess. It will be fun, so let’s do it!*
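For reference, the “whole mess” in question:
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},$$
where μ is the mean and σ the standard deviation.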
Today we were going to do another deep dive into the p.d.f. and C.D.F. relationship. Specifically, we were going to talk about specific p.d.f. functions and why we use them; however… I am not doing so hot today, so instead we are going to backtrack just a bit and talk about how a C.D.F. differs from our p.d.f. Even though we kind of covered it already, it would be nice to be clear, and I can do this in a (fairly) short post for the day. So that said, let’s get started, and we will pick up our p.d.f. discussion next time (maybe).*
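The one-line version of the difference:
$$F(x) = P(X \le x) = \int_{-\infty}^{x} f(t)\,dt, \qquad f(x) = \frac{dF}{dx}.$$
The pdf gives the relative likelihood at a point; the CDF accumulates that likelihood up to x.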
Oh hi, didn’t see you there. Today is part 2 of the probability density functions notes (posts?), whatever we are calling these. You can read part 1 here, as you should probably be familiar with the (super confusing) notation we use to describe our p.d.f. and our C.D.F. Now that we’ve given that lovely disclaimer, let’s look once again at probability density functions!*
We are well on our way to wrapping up week 4, what a ride it’s been! It’s been a long day for me, so today might be short. However, I really, really, really want to break into probability density functions. This topic is going to be a bit more advanced than some of the things we’ve covered (i.e., more writing), so it will most definitely be broken up. Let’s look at why and discover the wonderful weirdness of probability density functions!*
Now it seems like we are getting somewhere. Last post we covered the z-score, and you can read that if you haven’t already; it might be good to familiarize yourself with it, since today we are going to talk p-value and the difference between z-score and p-value. That said, let’s dive in and look at the value in the p-value.*
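As a preview of the difference: a z-score locates a value in standard-deviation units, while a p-value converts that location into a probability. For a two-sided test,
$$p = 2\big(1 - \Phi(|z|)\big),$$
where Φ is the standard normal CDF; for example, z = 1.96 gives p ≈ 0.05.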
So if you recall from last post… well, I’m not linking to it. It was hellishly personal, and frankly I’m still attempting to recover from it. We’re going to take it light this time, and we can do a deep dive into something in another post. For that reason, let’s talk about the z-score and what exactly it is. I mean, we used it in this post and never defined it formally, so let’s do that. Let’s talk z-score!*
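For the impatient, the formal definition the post works up to:
$$z = \frac{x - \mu}{\sigma},$$
i.e., how many standard deviations x sits from the mean μ.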
Okay, so not every post has to be strictly academic. If my Twitter feed is any indication, yesterday was World Suicide Prevention Day. So, with a heavy heart, I have not one but two very personal stories regarding suicide. Obviously this is a content warning for those wanting to go further; we will be dealing with suicide, death, and suicidal ideation.
Technically we could call this parametric statistics part 2. However, since we are covering nonparametric statistics and more importantly the difference between parametric and nonparametric statistics, it would seem that this title makes more sense. As usual with a continuation, you probably want to start at the beginning where we define parametric statistics. Ready to get started?*
Well my lovely readers, we’ve made it to the three-week mark, 5.7% of the way through! Okay, maybe that doesn’t seem like a big deal written like that, but hey, it’s progress. So last post we had our independence day, or rather defined what it meant to have independent events vs. dependent events. We also said it was an important assumption in parametric statistics that our events are independent, but then we realized we never defined what parametric statistics even is, oops. So let’s stop dragging our feet and talk parametric statistics!*
Because we introduced the central limit theorem last post, it’s time to introduce another important concept: the idea of independent events. While this may seem intuitive, it is one of the assumptions we make in parametric statistics, another concept we will define later, but for now let’s jump into independence.*
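The formal version of that intuition: events A and B are independent exactly when
$$P(A \cap B) = P(A)\,P(B),$$
or equivalently when P(A | B) = P(A), so knowing B tells you nothing about A.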
Well, here we are again. If you recall from our last post, we talked Bonferroni Correction. You may also recall that when the post concluded, there was no real topic for today. Well, after some ruminating, before we jump into more statistics, we should talk about the central limit theorem. So let’s do a quick dive into what that is and why you should know it!*
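In rough terms, the theorem: for independent, identically distributed samples with mean μ and finite variance σ², the sample mean is approximately normal for large n,
$$\bar{X}_n \approx \mathcal{N}\!\left(\mu, \frac{\sigma^2}{n}\right),$$
regardless of the shape of the original distribution.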
By now we are masters of statistics… right? Okay, not really, but we are getting there. So far we’ve covered two types of errors, type 1 which you can read about here, and type 2 which you can read about here. Armed with this new knowledge we can break into a way to correct for type 1 errors that come about from multiple comparisons. Sound confusing? Well, not for long, let’s break it down and talk Bonferroni.*
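The correction itself is one line: with m comparisons at an overall level α, test each comparison at
$$\alpha_{\text{per test}} = \frac{\alpha}{m}.$$
For example, 20 tests at an overall α = 0.05 means each individual test uses 0.0025.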
We did it, we cracked the coin conundrum! We managed the money mystery! We checked the change charade! We … well, you get the idea. Last post we (finally) determined if our coin was biased or not. Don’t worry, I won’t spoil it for you if you haven’t read it yet. I actually enjoyed working through a completely made up problem, so if you haven’t read it, you really should. Today we’re going to talk dogs, you’ll see what I mean, so let’s dive in.*
It looks like we’ve arrived at part 3 of what is now officially a trilogy of posts on statistical significance. There is so much more to say that I don’t quite want to call this the conclusion. Instead, let’s give a quick review of where we left off, and then we can get back to determining if an observed value is significant.*
Well here we are, two weeks into 365DoA. I was excited until I realized that puts us at 3.8356% of the way done. So if you remember from last post, we’ve started our significance talk, as in: what does it mean to have a value that is significant, and how do we find out? Today is the day I finally break; we’re going to have to do some math. Despite my best efforts, I don’t think we can finish the significance discussion without it and still manage to make sense. With that, let’s just dive in.*
If you’ve read my last post, you know I hinted that today we would discuss filtering. Instead, I think I want to take this in a different direction. That isn’t to say we won’t go over filtering, we most definitely will. Today I want to cover something else though: significance. So you’ve recorded your signal and taken an ensemble average; now how do we tell if it actually means something, or if you are looking at an artificial or arbitrary separation in your data (i.e., two separate conditions lead to no difference in your data)? Let’s look at significance.*
Noise, it can be troublesome. Whether you are studying and someone is being loud or you are trying to record something, noise is everywhere <stern look at people who talk during movies>. Interestingly enough, the concept of noise in a signal recording sense isn’t all too different from dealing with talkative moviegoers, so let’s talk noise!*
So you wanna use a spectrogram… but why? What does a spectrogram do that we can’t do using some other methods for signal processing? As it turns out, there are a lot of reasons you may want to use the spectrogram, and today we are going to cover some of those reasons, and number four may shock you! (Okay, not really; what do you think this is, a clickbait website?)*
Well, ten days in and we’ve just introduced the idea of the spectrogram. While a lot of this information is just the broad strokes, I like to think that we’ve covered enough to give you a good idea about how to use these tools and what they are used for. However, we do need to discuss a limitation of the spectrogram, something called the banana of uncertainty (okay, not quite the name, but you’ll see why I keep calling it that).*
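The limitation in question is the time-frequency uncertainty principle: a spectrogram cannot have arbitrarily fine time resolution Δt and frequency resolution Δf at the same time,
$$\Delta t\, \Delta f \ge \frac{1}{4\pi}.$$
Shorter windows sharpen timing but smear frequency, and vice versa.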
Last post we introduced a new tool in our arsenal of signal processing analysis, the spectrogram. Without knowing how to read it, it just looks sort of like a colored mess. Don’t get me wrong, it is an interesting looking colored mess, but a mess nonetheless. Well today we are going to talk about how to interpret the plot and why exactly we would ever use this seeming monstrosity.*
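A minimal MATLAB sketch of the kind of plot that post interprets; the signal and every parameter here are arbitrary choices for illustration, not taken from the post.
% A chirp sweeping 0 -> 250 Hz over 2 s makes the time/frequency picture easy to read
fs = 1000;                    % sample rate (Hz)
t  = 0:1/fs:2;                % 2 seconds of samples
x  = chirp(t, 0, 2, 250);     % instantaneous frequency rises linearly with time
spectrogram(x, hann(256), 128, 256, fs, 'yaxis')  % time on x, frequency on y, power as color
The rising diagonal stripe is the chirp; how blurry it looks is set by the window length.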
Waves! We’re officially one week through 365 Days of Academia! Woo! 1 week down, 51(.142…) weeks left! Let’s wrap up this week’s theme (there wasn’t originally a theme, but it kind of ended up that way) by talking about other ways we can get to the frequency domain. Specifically, let’s stop the wave puns and talk wavelets!*
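As a quick preview, the continuous wavelet transform replaces fixed-length windows with scaled and shifted copies of a mother wavelet ψ:
$$W(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) dt,$$
so small scales a catch fast events and large scales catch slow ones.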
Okay, if you’ve been keeping up with these posts, we know about Welch’s method, Thomson’s method, the things that make them different, and the things that make them similar. The thing that both of these methods rely on is the Fourier transform. What is the Fourier transform? Well, it’s something I probably should have covered first, but whatever, this is my blog and we do it in whatever order we feel like, so let’s dive in!*
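For reference, the transform in question:
$$X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-2\pi i f t}\, dt,$$
which rewrites a signal in time, x(t), as its content at each frequency f.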
One day someone looked at the windowed Fourier transform and said, “Don’t be such a square!” and thus window functions were invented. If you believe that, then I have an island for sale, real cheap. But seriously, let’s do a dive into what a window function is and why the heck there are so many of them, because there ARE a LOT! So let’s get started!*
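A minimal sketch of one of the many, the Hann window; the sizes are arbitrary choices for illustration.
% Hann window: tapers the segment edges to zero before the FFT
N = 256;
n = (0:N-1)';
w = 0.5 * (1 - cos(2*pi*n/(N-1)));  % same shape as MATLAB's hann(N)
x = randn(N, 1);                    % stand-in signal
X = fft(x .* w);                    % windowed FFT; tapered edges reduce leakage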
Leakage, it’s never a good thing. For today’s post we’re going to cover a very important topic: spectral leakage. It’s a big reason why spectral density estimation is, well, an estimation. The other reason it is an estimation is that the discrete Fourier transform only approximates the spectrum of the original signal, but the Fourier transform is a whole other post on its own. So let’s talk leakage!*
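A minimal sketch of leakage in action; the frequencies are picked arbitrarily for illustration.
% A tone that fits a whole number of cycles in the window stays in one FFT bin;
% one that doesn't smears across the spectrum
fs = 1000; N = 1000;
t = (0:N-1)/fs;                % exactly 1 second, so bins are 1 Hz apart
clean = sin(2*pi*100*t);       % 100 Hz: lands exactly on a bin
leaky = sin(2*pi*100.5*t);     % 100.5 Hz: falls between bins and leaks
plot(abs(fft(clean))); hold on; plot(abs(fft(leaky)))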
In our last post we introduced the two main characters in this story of spectrogram. On one end we have Welch’s method (pwelch); on the other end we have the Thomson multitaper method (pmtm). As promised, here is an awfully basic breakdown of why there is more than one way to compute power spectral density (in fact there are several, far more than the two I’m talking about). So, let’s just dig right in!*
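A minimal side-by-side sketch; the test signal and every parameter are illustrative choices, not from the posts.
% Same noisy tone, two PSD estimators
fs = 1000;
t  = (0:1/fs:1-1/fs)';
x  = sin(2*pi*120*t) + randn(size(t));         % 120 Hz tone buried in noise
[pw, fw] = pwelch(x, hann(256), 128, 256, fs); % Welch: averaged overlapping windowed segments
[pm, fm] = pmtm(x, 4, 256, fs);                % Thomson multitaper, time-bandwidth product nw = 4
plot(fw, 10*log10(pw)); hold on; plot(fm, 10*log10(pm))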
This is a (somewhat) continuation of what we were discussing in the previous post. We covered the pwelch MATLAB function; this time we will cover the pmtm function, which uses the Thomson multitaper method to calculate power spectral density. We can do a deep dive into the differences between the two next time, but for now let’s talk about the command itself.*
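A minimal usage sketch; the signal and the nw value are arbitrary choices for illustration.
% pmtm's key knob is the time-bandwidth product nw (2*nw - 1 tapers by default)
fs = 500;
t  = (0:1/fs:2-1/fs)';
x  = cos(2*pi*60*t) + 0.5*randn(size(t));  % 60 Hz tone plus noise
[pxx, f] = pmtm(x, 4, [], fs);             % multitaper PSD estimate; [] uses the default nfft
plot(f, 10*log10(pxx))
Larger nw averages more tapers, giving a smoother but coarser estimate.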
Signal processing, it’s complex. There are a million ways to go about processing a signal and, like life, there is no best way to go about doing it. Trust me, it is as frustrating as it sounds. Today’s scratch pad note is on power spectral density, or PSD for short. So let’s dive in.*
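The quantity itself, loosely: the PSD describes how a signal’s power is spread across frequency. For a wide-sense-stationary signal it is the Fourier transform of the autocorrelation r_{xx}(τ) (the Wiener-Khinchin theorem):
$$S_{xx}(f) = \int_{-\infty}^{\infty} r_{xx}(\tau)\, e^{-2\pi i f \tau}\, d\tau.$$
Everything pwelch and pmtm do is an attempt to estimate this from finite data.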
Suddenly your absent-minded thoughts are shattered by a loud noise. Quickly you look around; to your left you see it: a child has been shot and is bleeding heavily. People are standing around with their phones, some calling emergency services, some filming, but most looking confused and scared. No one is actively trying to help; you hear that they are afraid that the person, or persons, who shot the child is still around. What do you do next: do you choose indifference, or do you help?
Right now you are probably thinking that I am going to unleash some poorly thought out diatribe about president-elect Trump. No, that is not going to happen. It is not going to happen because he is not the problem; you are the problem, I am the problem, we are the problem. That goes for those of you who are atheists, Catholics, Muslims, conservatives, liberals, or anything in-between.
In the spirit of Halloween, we bring you the science fact and fiction behind the undead. Zombies, those brain-loving little guys (and girls), are everywhere. Sure, we are all familiar with the classic zombie, but did you know that we aren’t the only zombie lovers out there? It turns out that nature has its own special types of zombies, but this isn’t a science fiction movie; this is science fact! Sometimes fact can be scarier than fiction, so let’s dive in.
New research reveals that certain alterations in the brain may be present in pedophiles, with differences between hands-on offenders and those who have not sexually offended against children.
There are plenty of reasons it’s important to maintain a healthy weight, and now you can add one more to the list: It may be good for your brain. Researchers have found that having a higher body mass index, or BMI, can negatively impact cognitive functioning in older adults.
Neu5Gc, a non-human sialic acid sugar molecule common in red meat that increases the risk of tumor formation in humans, is also prevalent in pig organs, with concentrations increasing as the organs are cooked, a study has found. The research suggests that Neu5Gc may pose a significant health hazard among those who regularly consume organ meats from pigs.