Well I’m alive, despite the VA’s best efforts. I’m struggling with some serious nausea, to the point of vomiting, which has never happened to me before. I’m also in a lot of pain, but that was expected. In any case, from start to finish (start being the operating room and finish being getting home, so add another 30 minutes or so to the actual finish time) it took ~9 hours total. Good times for everyone. Anywho, I feel like death, so I’ll write more later.
Well, we did an experiment. I wish I could talk more about what we did, how we did it, and why we did it. Alas, I cannot. So instead, let’s talk about the vague “how it went” metric, as in maybe we found something, maybe we didn’t. This experiment also highlighted several quirks between my school lab and the clinical lab.
Today is day one of ten for the time that I have to do some experiments. It’s an awkward time for sure, what with surgery, school, etc. However, that’s just the way things work in academia. I actually had a break, so I’m ready to go, to be honest, which really means this isn’t horrible timing. I’ve already discussed the million things going on these weeks, but let’s talk about what really goes into experiments.
Today was an interesting set of events. I had my meeting with my two PIs (which I still think would make a hilarious television show). The meeting went well and I’m very excited, but I’m also getting ready to be very, VERY busy. Let’s break down how it went, shall we?
Today’s post was inspired by a conversation I was having yesterday in the comment section (you know who you are, and thank you for the questions). I thought I would elaborate on how we record from the brain and why. There are a lot of different ways we can do this; some of them are super invasive and others are non-invasive. In the lab I work in now, we do things non-invasively. There are good things and bad things about this, so let’s get into it!
Yesterday I mentioned that I had some rat data to go through. It was an old(er) dataset, about five years old to be exact, but it was one that was going to help me validate some of my findings. Unfortunately, there were no invasive human datasets to compare my human data to, so I needed to find an animal model, in this case a rat model. Let’s discuss the importance.
A few days ago I mentioned I did a thing, well, an experimental thing really. It was… fun? It was definitely something. Overall it went well, but I said I would give everyone an update, and I try to be a man of my word, so let’s do this.
Today I had my experiment (yay), so now I need to process the data. I also sat in on another PhD defense for one of our lab members, so now that I have a free second I wanted to give an update. Expect a longer post tomorrow, but for today, I have sooooo much work to do!
Until next time, don’t stop learning!
Here we are, another day, another post. Today I will be spending the bulk of my time studying and getting my slides ready for the conference I’ll be attending next week. That will be… fun? However, today is also an important day for one of my fellow students: he’s defending his PhD.
It looks like things are moving a little quicker than I thought. As you may or may not know, I’m getting ready to do an experiment. Well, we finally (finally!) finalized the protocol, and just in time too. While I won’t make the deadline for my project update, I will have some data to show when we get to the conference, which is a good consolation prize.
I’ve talked about my impending deadlines a lot lately. I also mentioned that I had an experiment that I needed to do to meet a deadline; well, it looks like we may or may not meet this goal. Let’s talk about the latest headaches.
Because we introduced the central limit theorem last post, it’s time to introduce another important concept: independent events. While this may seem intuitive, independence is one of the assumptions we make in parametric statistics, another concept we will define later. For now, let’s jump into independence.*
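(A quick toy sketch in Python before we get going; the two-coin setup is entirely my own made-up example, not anything from real data. The point is just that, for independent events, the probability of both happening should match the product of the individual probabilities.)

```python
# A minimal sketch: checking independence of two simulated coin flips,
# where P(A and B) should equal P(A) * P(B). All values here are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
coin_a = rng.integers(0, 2, n)   # flip 1: 0 = tails, 1 = heads
coin_b = rng.integers(0, 2, n)   # flip 2, generated separately, so independent

p_a = np.mean(coin_a == 1)                      # P(A): first coin is heads
p_b = np.mean(coin_b == 1)                      # P(B): second coin is heads
p_ab = np.mean((coin_a == 1) & (coin_b == 1))   # P(A and B)

print(f"P(A) * P(B) = {p_a * p_b:.4f}")
print(f"P(A and B)  = {p_ab:.4f}")   # roughly equal when the events are independent
```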
Well, here we are again. If you recall from our last post, we talked about the Bonferroni correction. You may also recall that when the post concluded, there was no real topic for today. Well, after some ruminating, I decided that before we jump into more statistics, we should talk about the central limit theorem. So let’s do a quick dive into what that is and why you should know it!*
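(If you want a sneak peek at the central limit theorem in action, here’s a little toy simulation I threw together; the exponential distribution and the sample sizes are arbitrary choices for illustration only.)

```python
# A rough illustration: the means of samples from a very non-normal distribution
# start to look normal (symmetric, skewness near 0) as the sample size grows.
import numpy as np

rng = np.random.default_rng(0)
for sample_size in (1, 5, 30):
    # 10,000 experiments, each taking the mean of `sample_size` exponential draws
    means = rng.exponential(scale=1.0, size=(10_000, sample_size)).mean(axis=1)
    # skewness near 0 suggests the distribution of the means is becoming normal-ish
    skew = np.mean(((means - means.mean()) / means.std()) ** 3)
    print(f"n = {sample_size:>2}: skewness of sample means = {skew:.2f}")
```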
By now we are masters of statistics… right? Okay, not really, but we are getting there. So far we’ve covered two types of errors: type 1, which you can read about here, and type 2, which you can read about here. Armed with this new knowledge, we can dig into a way to correct for type 1 errors that come about from multiple comparisons. Sound confusing? Well, not for long; let’s break it down and talk Bonferroni.*
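(For the impatient, the core of the Bonferroni correction fits in a few lines; the p-values below are made up purely for illustration.)

```python
# The Bonferroni idea in miniature: with m comparisons, test each one
# against alpha / m instead of alpha. These p-values are hypothetical.
alpha = 0.05
p_values = [0.004, 0.020, 0.049]      # pretend results from 3 comparisons
m = len(p_values)
corrected_alpha = alpha / m           # 0.05 / 3 ≈ 0.0167

for p in p_values:
    verdict = "significant" if p < corrected_alpha else "not significant"
    print(f"p = {p:.3f} -> {verdict} at corrected alpha = {corrected_alpha:.4f}")
```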
Last post we did a quick bit on type 1 errors. As with anything, there is more than one way to make an error. Today we are talking type 2 errors! The two are related in a sense, and we’ll go over what that means and compare them right… now!*
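(As a teaser, here’s a toy simulation of my own showing the two error types side by side; the effect size, sample size, and alpha are all made-up numbers, not anything from real data.)

```python
# Estimating type 1 and type 2 error rates for a two-sample t-test by simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 20, 2000

false_positives = 0   # type 1: reject when there is no real difference
misses = 0            # type 2: fail to reject when there is a real difference
for _ in range(trials):
    # Null true: both groups drawn from the same distribution
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1
    # Alternative true: second group shifted by 0.5 (an arbitrary effect size)
    a, b = rng.normal(0, 1, n), rng.normal(0.5, 1, n)
    if stats.ttest_ind(a, b).pvalue >= alpha:
        misses += 1

print(f"Estimated type 1 error rate: {false_positives / trials:.3f}")  # ~alpha
print(f"Estimated type 2 error rate: {misses / trials:.3f}")
```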
We did it, we cracked the coin conundrum! We managed the money mystery! We checked the change charade! We… well, you get the idea. Last post we (finally) determined whether our coin was biased or not. Don’t worry, I won’t spoil it for you if you haven’t read it yet. I actually enjoyed working through a completely made-up problem, so if you haven’t read it, you really should. Today we’re going to talk dogs; you’ll see what I mean, so let’s dive in.*
It looks like we’ve arrived at part 3 of what is now officially a trilogy of posts on statistical significance. There is so much more to say that I don’t quite want to call this the conclusion. Instead, let’s do a quick review of where we left off, and then we can get back to determining whether an observed value is significant.*
Well, here we are, two weeks into 365DoA. I was excited until I realized that puts us at only 3.8356% of the way done. If you remember from the last post, we’ve started our significance talk: what does it mean for a value to be significant, and how do we find out? Today is the day I finally break; we’re going to have to do some math. Despite my best efforts, I don’t think we can finish the significance discussion without it and still make sense. With that, let’s just dive in.*
If you read my last post, you know I hinted that today we would discuss filtering. Instead, I think I want to take this in a different direction. That isn’t to say we won’t go over filtering; we most definitely will. Today, though, I want to cover something else: significance. So you’ve recorded your signal and taken an ensemble average; now how do we tell whether it actually means something, or whether you are looking at an artificial or arbitrary separation in your data (i.e., two separate conditions that lead to no real difference)? Let’s look at significance.*
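(To make that concrete, here’s a rough sketch with simulated data, definitely not our actual recordings, of one way you might compare two conditions after reducing each trial to a single summary value.)

```python
# Simulated example: condition B has a small evoked bump that condition A lacks.
# We summarize each trial by its mean amplitude in a window, then t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_samples = 40, 200

bump = 0.8 * np.exp(-((np.arange(n_samples) - 100) ** 2) / (2 * 10 ** 2))
cond_a = rng.normal(0, 1, (n_trials, n_samples))          # noise only
cond_b = rng.normal(0, 1, (n_trials, n_samples)) + bump    # noise + evoked response

# One summary value per trial: mean amplitude in a window around the bump
window = slice(90, 110)
amp_a = cond_a[:, window].mean(axis=1)
amp_b = cond_b[:, window].mean(axis=1)

t_stat, p_value = stats.ttest_ind(amp_a, amp_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p -> conditions likely differ
```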
Noise, it can be troublesome. Whether you are studying and someone is being loud, or you are trying to record something, noise is everywhere <stern look at people who talk during movies>. Interestingly enough, the concept of noise in a signal-recording sense isn’t all that different from dealing with talkative moviegoers, so let’s talk noise!*
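(A small toy example of my own, using a fake sine-wave “signal,” of why averaging repeated trials helps beat down noise; for independent noise the leftover noise shrinks roughly like the square root of the number of trials.)

```python
# Averaging repeated noisy trials: the underlying signal stays, the noise shrinks.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t)            # the "real" signal we care about

for n_trials in (1, 16, 100):
    trials = signal + rng.normal(0, 1, (n_trials, t.size))  # each trial = signal + noise
    average = trials.mean(axis=0)
    residual_noise = np.std(average - signal)
    print(f"{n_trials:>3} trials averaged -> residual noise std = {residual_noise:.3f}")
```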