It has been a busy week. As you've seen, I had not one, but two Skype a Scientist sessions in one day, then we did some outreach with some local 4th graders, and yesterday I even posted photos of the event. Yesterday I also had a conference call to help set up an event that I'm helping run for neurotech entrepreneurs. If you follow me on Twitter, you know I've pushed people to apply for it. So let's talk about what I've got going on today!
As you may have seen, yesterday we had our lab tour group come through. So today I just wanted to share a few photos from the time they had with us. It was a lot of fun, and hopefully we inspired a few kids!
When doing an advanced degree (Masters or PhD) you end up with a lot of different responsibilities that have nothing to do with your education. That isn't to say they aren't important or that I hate doing them; you just don't learn anything with regard to your study subject. Today is one of those days, so let's talk about it.
Today is Skype a Scientist day! Every term I volunteer my time and try to explain my journey, my research, and my pitfalls to students all over the US. Technically this is my second session (of six!) this term, but today I wanted to talk about why I do what I do. So if you're interested in what it's all about, keep reading.
Like we did with question 1, this will be the solution to the question we posed in the last post. If you haven't tried to solve it yet, go give it a shot. If you have and are dying to check your answer, then let's look at the solution.*
For those of you who have been following along, today we are going to post another question and in the next post we will give the solution. This will be another two random variable question and we’ve covered everything you need to solve it in our previous posts. So with that, let’s get to today’s question.*
Hopefully if you’re reading this you saw our last post, where we gave the question we will solve today. If you haven’t had a chance to try and solve it, please feel free to stop and give it a shot. If you’re ready to see how we solve it, then let’s get started.*
Well, now that we've had a minute to take a breath, let's try out something new. In this post I will give the question and in the next post we can work out the answer. For those of you playing at home, this will be a good way to check your knowledge, and for me, it will give me the chance to do the same.*
We will most likely pick back up tomorrow. Today, however, is one of those much-needed rest days. Don't worry, we're still going to get into it, just not today.
Until next time, don’t stop learning!
A brief word, since I don't have time for a full post today. If/when you start down the path towards your PhD or Masters, remember that you need to balance work and life. That isn't to say you need to find a super exciting hobby, more that you need to unwind every once in a while. There is a lot of burnout in academia; students often find themselves overwhelmed and have a higher rate of depression than the general public. It's okay to need help, it's okay to say you cannot do something, and most importantly, it is okay to take time for yourself.
Okay, a quick example: still not super difficult, but one we can work out to a complete solution. We've gone over a few examples now, but we're going to go over a few more for both my benefit and yours. So let's dive in.*
Well, in our last post we took a break and talked zombies! While I would love to do a whole month of Halloween topics, this year is not the time; maybe next year. In any case, today we are going to go over another example of a single function of two random variables. This is going to be slightly more complex than our first example, but it won't be extremely complex (we're working towards it). So let's take a look, shall we?*
Time for a break from stochastic processes, at least for the moment. Every year here we update and post our favorite Halloween tradition! So today we bring you the science fact and fiction behind the undead. Zombies, those brain-loving little things, are everywhere. Sure, we are all familiar with the classic zombie, but did you know that we aren't the only zombie lovers out there? It turns out that nature has its own special types of zombies, but this isn't a science fiction movie, this is science fact! Sometimes fact can be scarier than fiction, so let's dive in. Let's talk zombies.
Hopefully we've demystified more than just a few concepts at this point. Today we are going to look at one function of two random variables. Originally I was going to break into a joint CDF example that involved dependent variables, but it turns out my book doesn't cover that! Oops, guess I should've read ahead. In any case, let's talk functions!*
Well here we are again, today we are talking functions of two random variables. If you’re looking for the beginning, this isn’t it, but you can read the introduction here. If you’ve kept up, then you’re ready to go over the example we have today, so let’s get started.*
As promised, today we are going to talk about two random variables that are not independent. This means that the joint probability is no longer just the product of the individual probabilities (like it was yesterday). Like our normal CDF, we can find a CDF for two random variables, so let's take a look at how this works.*
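As a quick illustration of that product rule, here is a minimal sketch in Python. The joint probability table below is made up purely for this example: for independent variables the joint probability factors into the product of the marginals, and for a dependent pair like this one it does not.

```python
from itertools import product

# Hypothetical joint probability table for two dependent discrete
# random variables X and Y (made-up numbers, just for illustration).
joint = {
    (0, 0): 0.30, (0, 1): 0.20,
    (1, 0): 0.10, (1, 1): 0.40,
}

# Marginals: sum the joint table over the other variable.
p_x = {x: sum(p for (xi, _), p in joint.items() if xi == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yi), p in joint.items() if yi == y) for y in (0, 1)}

# Independence test: does P(X=x, Y=y) equal the product P(X=x) * P(Y=y)?
independent = all(
    abs(joint[(x, y)] - p_x[x] * p_y[y]) < 1e-12
    for x, y in product((0, 1), (0, 1))
)
print(independent)  # False: for this table the product rule fails
```

For instance, P(X=0)·P(Y=0) = 0.5 · 0.4 = 0.2 here, while the joint entry is 0.3, so the two variables are dependent.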
I was debating about not posting anything today. It’s been a bit rough for me these past few days. However, I’m going to write a little something today and tomorrow to introduce two random variables (so we don’t skip a day). This is going to be a lot like our single random variable examples, but (of course) more complex, let’s take a look at what I mean.*
Well, maybe yesterday was confusing, maybe it wasn't. In any case, today should clarify some things for you if you are confused and make things even clearer if you are not. Today we are going to go over a quick example of what a function involving one random variable looks like. Now, you may notice I keep saying one; that's because you can technically have as many variables as you want, but since this is fairly complex stuff, let's just stick with the one for now.*
Now that we’ve looked at conditional probabilities we can talk about other things we can do with random variables. If you’ve been keeping up with us so far, then this shouldn’t be too crazy of an idea, really all we are going to do today is take a random variable and transform it somehow. Interested? Let’s go!*
Up to now we've been dealing with single-variable pdfs and their corresponding CDFs. We said that these probabilities relied on our variable of interest being independent. However, what if we knew some property that impacted our probability? Today we are talking conditional probability, and that is the question we will be answering. It's going to be a long, long post, so plan accordingly.*
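Before the full post, the core definition fits in a few lines: the conditional probability of A given B is P(A ∩ B) / P(B). Here is a tiny Python sketch using a fair six-sided die; the events are made up purely for illustration.

```python
# Conditional probability: P(A | B) = P(A and B) / P(B),
# shown with a fair six-sided die (made-up events, purely illustrative).
p_face = 1 / 6            # each face is equally likely

A = {2, 4, 6}             # event A: the roll is even
B = {4, 5, 6}             # event B: the roll is greater than 3

p_b = len(B) * p_face             # P(B) = 3/6
p_a_and_b = len(A & B) * p_face   # P(A and B) = 2/6 (faces 4 and 6)
p_a_given_b = p_a_and_b / p_b     # knowing the roll beat 3 raises P(even) to 2/3
print(p_a_given_b)
```

Notice that conditioning on B changed the probability of A from 1/2 to 2/3, which is exactly what it means for the events to be dependent.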
Maybe we shouldn't phrase it this way, since there are still quite a few days left of 365DoA, but you made it to the end! No, not THE end, but if you've been following along, the past few posts have introduced several seemingly disparate concepts and said, "don't worry, they are related," without telling you how. Well, today, like a magician showing you how to pull a rabbit from a hat, let's connect the dots and explain why we introduced all those concepts!*
Over the past couple of days, I've been talking about several different types of pdfs and their associated CDFs. Hopefully we have a clear understanding of each of those concepts; for those of you scratching your head, I would recommend you start here at this other post. Otherwise, let's (finally) look at a real-life example using the exponential pdf!*
Well, here we are again… unless maybe you're new, in which case welcome. If you are just joining us, we are talking p.d.f.s; no, not the file format, the probability density function version. If you're new, you may want to start back here(ish). If not, then let's talk about the strangely similar Laplace distribution.*
Well, it has been a week; don't even get me started. But if you're here you don't want to hear me complain about my week, that isn't why we come together! Today let's do a bit of a dive into the exponential p.d.f. I hope you've brushed up, because this is going to get interesting.*
Day 30 already! Where does the time go? It feels like we just started this whole project, and it probably wouldn't be a good idea to look at the remaining time to completion, so let's not and just enjoy the nice round 30. We will get back to our p.d.f. another day, but today is going to be short. That's what I usually say before typing out 10 pages' worth of information, so to avoid that, let's touch on something important, but something I can cover briefly. Today we're talking about confidence intervals.*
Well, apparently you guys really appreciated my probability density function posts. It's good to see people interested in something a little less well-known (at least to me). So for those of you just joining us, you'll want to start at part 1 here. For those of you who are keeping up with the posts, let's review and then look at specific functions. Namely, let's start by going back to our Gaussian distribution function and talk about what's going on with that whole mess. It will be fun, so let's do it!*
Today we were going to do another deep dive into the p.d.f. and C.D.F. relationship. Specifically, we were going to talk about specific p.d.f. functions and why we use them; however… I am not doing so hot today, so instead we are going to backtrack just a bit and talk about how a C.D.F. differs from our p.d.f. Even though we kind of covered it, it would be nice to be clear, and I can do this in a (fairly) short post for the day. So, that said, let's get started, and we will pick up our p.d.f. discussion next time (maybe).*
Oh, hi, didn't see you there. Today is part 2 of the probability density function notes (posts?), whatever we are calling these. You can read part 1 here, as you should probably be familiar with the (super confusing) notation we use to describe our p.d.f. and our C.D.F. Now that we've given that lovely disclaimer, let's look once again at probability density functions!*
We are well on our way to wrapping up week 4, what a ride it's been! It's been a long day for me, so today might be short. However, I really, really, really want to break into probability density functions. This topic is going to be a bit more advanced than some of the things we've covered (i.e., more writing) so it will most definitely be broken up. Let's look at why and discover the wonderful weirdness of probability density functions!*
Now it seems like we are getting somewhere. Last post we covered z-score and you can read that if you haven’t already, it might be good to familiarize yourself with it since today we are going to talk p-value and the difference between z-score and p-value. That said, let’s dive in and look at the value in the p-value.*
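As a preview of how the two connect, here is a minimal, self-contained sketch. The numbers are made up, and it assumes a standard normal reference distribution: the z-score counts how many standard deviations an observation sits from the mean, and the two-sided p-value is the probability of seeing something at least that extreme.

```python
from math import erf, sqrt

def z_score(x, mu, sigma):
    """How many standard deviations x lies from the mean mu."""
    return (x - mu) / sigma

def two_sided_p_value(z):
    """Two-sided p-value for a z-score, assuming a standard normal."""
    # Standard normal CDF via the error function:
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return 2 * (1 - phi)

# Made-up numbers for illustration: an observation of 1.2 from a
# population with mean 0 and standard deviation 1.
z = z_score(x=1.2, mu=0.0, sigma=1.0)
p = two_sided_p_value(z)
print(z, p)  # z = 1.2, p is roughly 0.23 (not significant at alpha = 0.05)
```

The takeaway: the z-score is a distance in standard deviations, while the p-value converts that distance into a probability you can compare against a significance threshold.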
So if you recall from last post… well I’m not linking to it. It was hellishly personal and frankly I’m still attempting to recover from it. We’re going to take it light this time and we can do a deep dive into something in another post. For that reason, let’s talk about z-score and what exactly it is, I mean we used it in this post and never defined it formally, so let’s do that. Let’s talk z-score!*
Okay, so not every post has to be strictly academic. If my Twitter feed is any indication, yesterday was World Suicide Prevention Day. So with a heavy heart I have not one, but two very personal stories regarding suicide. Obviously this is a content warning for those wanting to go further: we will be dealing with suicide, death, and suicidal ideation.
Technically we could call this parametric statistics part 2. However, since we are covering nonparametric statistics and more importantly the difference between parametric and nonparametric statistics, it would seem that this title makes more sense. As usual with a continuation, you probably want to start at the beginning where we define parametric statistics. Ready to get started?*
Well my lovely readers, we’ve made it to the three week mark, 5.7% of the way through! Okay maybe that doesn’t seem like a big deal written like that, but hey it’s progress. So last post we had our independence day, or rather defined what it meant to have independent events vs. dependent events. We also said it was an important assumption in parametric statistics that our events are independent, but then we realized we never defined what parametric statistics even is, oops. So let’s stop dragging our feet and talk parametric statistics!*
Because we introduced the central limit theorem last post, it's time to introduce another important concept: the idea of independent events. While this may seem intuitive, it is one of the assumptions we make in parametric statistics, another concept we will define. But for now, let's jump into independence.*
Well here we are again, if you recall from our last post, we talked Bonferroni Correction. You may also recall that when the post concluded, there was no real topic for today. Well after some ruminating, before we jump into more statistics, we should talk about the central limit theorem. So let’s do a quick dive into what that is and why you should know it!*
By now we are masters of statistics… right? Okay, not really, but we are getting there. So far we’ve covered two types of errors, type 1 which you can read about here, and type 2 which you can read about here. Armed with this new knowledge we can break into a way to correct for type 1 errors that come about from multiple comparisons. Sound confusing? Well, not for long, let’s break it down and talk Bonferroni.*
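For the curious, the Bonferroni idea fits in just a few lines: with m comparisons, test each p-value against alpha / m instead of alpha, which keeps the overall (family-wise) chance of a type 1 error at or below alpha. A minimal sketch with made-up p-values:

```python
# Bonferroni correction sketch: with m comparisons, each test is judged
# against alpha / m instead of alpha. The p-values below are made up.
alpha = 0.05
p_values = [0.001, 0.020, 0.040, 0.300]   # hypothetical results of 4 tests
m = len(p_values)

adjusted_alpha = alpha / m                # 0.05 / 4 = 0.0125
significant = [p < adjusted_alpha for p in p_values]
print(adjusted_alpha, significant)        # only the smallest p-value passes
```

Note that without the correction, three of these four made-up p-values would fall below 0.05; after Bonferroni only the smallest survives, which is exactly the point of guarding against multiple comparisons.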
Last post we did a quick bit on type 1 errors. As with anything, there is more than one way to make an error. Today we are talking type 2 errors! They are related, in a sense, and we'll go over what that means and compare the two right… now!