Well after a few days of curing time, my laptop is alive again! In fact I’m writing this on it now. YAY!! I honestly don’t have the money to replace the thing and I already have much needed car repairs to attend to, so this cannot break on me yet.
Well it’s post-Christmas day and I have to say my stealth wrapping was a hit. Of course, after the first gift (or even the second gift) people catch on, but overall it was a lot of fun and I even got a few apologies for some of the ill will based on my apparent lack of wrapping abilities. I HIGHLY recommend giving it a shot. So let’s talk about the image above for a minute.
Well it finally happened. My laptop looks pretty dead. Right now I have it in pieces while I try to figure out what went wrong with it. Thankfully I have a desktop computer that I use for all my heavy computing as a backup. Just one more expense I guess; I should be grateful that it wasn’t something more serious like the hard drive going out.
So short post, I know, but I have to get this fixed or find some sort of resolution since my laptop is an important part of how I get work done. I’ll have a longer post tomorrow, but for now I think that is it.
Until next time, don’t stop learning!
For those of you following along, I’ve been trying to crack a predictive model using some novel (read: super secret PhD work) neural data. It’s been a journey and I’ve trained and tested about a dozen or so models, with varying success. Things have been going pretty smoothly the past few weeks as I try to create the best model I possibly can. Unfortunately, technology had other plans for me.
Technically we could call this parametric statistics part 2. However, since we are covering nonparametric statistics and more importantly the difference between parametric and nonparametric statistics, it would seem that this title makes more sense. As usual with a continuation, you probably want to start at the beginning where we define parametric statistics. Ready to get started?*
Well my lovely readers, we’ve made it to the three week mark, 5.7% of the way through! Okay maybe that doesn’t seem like a big deal written like that, but hey it’s progress. So last post we had our independence day, or rather defined what it meant to have independent events vs. dependent events. We also said it was an important assumption in parametric statistics that our events are independent, but then we realized we never defined what parametric statistics even is, oops. So let’s stop dragging our feet and talk parametric statistics!*
Because we introduced the central limit theorem last post, it’s time to introduce another important concept: independent events. While the idea may seem intuitive, independence is one of the assumptions we make in parametric statistics, another concept we will define later, but for now let’s jump into independence.*
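The post excerpt above doesn’t include code, but here’s a quick sketch in Python (my addition, with made-up coin flips) of what independence means numerically: for independent events A and B, P(A and B) should equal P(A) × P(B).

```python
import random

random.seed(42)

# Simulate two coin flips many times and compare P(A and B) to
# P(A) * P(B); for independent events the two should (nearly) match.
n = 100_000
a_heads = b_heads = both_heads = 0
for _ in range(n):
    a = random.random() < 0.5  # flip 1 lands heads
    b = random.random() < 0.5  # flip 2 lands heads
    a_heads += a
    b_heads += b
    both_heads += a and b

p_a = a_heads / n
p_b = b_heads / n
p_both = both_heads / n
print(p_both, p_a * p_b)  # the two numbers should be very close
```

If the flips were dependent (say, the second coin copied the first), P(A and B) would drift away from the product, which is exactly the kind of violation parametric methods care about.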
Well here we are again. If you recall from our last post, we talked about the Bonferroni correction. You may also recall that when the post concluded, there was no real topic for today. Well after some ruminating, before we jump into more statistics, we should talk about the central limit theorem. So let’s do a quick dive into what that is and why you should know it!*
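As a quick illustration (mine, not from the post), here’s a pure-Python sketch of the central limit theorem in action: averages of samples drawn from a decidedly non-normal (uniform) distribution pile up around the true mean, and the spread of those averages shrinks as the sample size grows.

```python
import random
import statistics

random.seed(0)

# Draw many samples from a uniform distribution on [0, 1) and record
# the mean of each sample; the distribution of those means tightens
# around the population mean (0.5) as sample size grows.
def sample_means(sample_size, n_trials=2000):
    return [statistics.fmean(random.random() for _ in range(sample_size))
            for _ in range(n_trials)]

small = sample_means(5)
large = sample_means(50)

print(statistics.fmean(large))  # close to 0.5
print(statistics.stdev(small) > statistics.stdev(large))  # spread shrinks
```

Swap the uniform draws for almost any distribution with finite variance and the same thing happens, which is why the theorem shows up everywhere in parametric statistics.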
By now we are masters of statistics… right? Okay, not really, but we are getting there. So far we’ve covered two types of errors, type 1 which you can read about here, and type 2 which you can read about here. Armed with this new knowledge we can break into a way to correct for type 1 errors that come about from multiple comparisons. Sound confusing? Well, not for long, let’s break it down and talk Bonferroni.*
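The mechanics of the Bonferroni correction fit in a few lines, so here’s a sketch (the p-values are entirely made up by me): with m comparisons, you test each p-value against alpha / m instead of alpha.

```python
# Bonferroni correction sketch: with m comparisons at overall level
# alpha, require each individual p-value to beat alpha / m.
alpha = 0.05
p_values = [0.001, 0.020, 0.049]  # hypothetical p-values from 3 tests
m = len(p_values)

corrected_alpha = alpha / m  # 0.05 / 3 ≈ 0.0167
significant = [p < corrected_alpha for p in p_values]
print(significant)  # → [True, False, False]
```

Notice that 0.020 and 0.049 would both have passed the usual 0.05 cutoff on their own; the correction is deliberately strict so the family-wise chance of a type 1 error stays near alpha.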
Last post we did a quick bit on type 1 errors. As with anything, there is more than one way to make an error. Today we are talking type 2 errors! The two are related in a sense, and we’ll go over what that means and compare them right… now!*
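To make the idea concrete, here’s a simulation sketch of my own (all the numbers are made up): a type 2 error is failing to detect an effect that is really there, and small, noisy experiments make that failure common.

```python
import math
import random
import statistics

random.seed(3)

# Simulate a small experiment with a real (but modest) effect and ask
# whether a rough two-sample test flags it; each failure to flag the
# real effect is a type 2 error.
def experiment(effect=0.3, n=10):
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(effect, 1.0) for _ in range(n)]
    se = math.sqrt(statistics.variance(control) / n +
                   statistics.variance(treated) / n)
    diff = statistics.fmean(treated) - statistics.fmean(control)
    return abs(diff) / se > 2  # crude significance cutoff

results = [experiment() for _ in range(500)]
misses = results.count(False)  # each miss is a type 2 error
print(misses / len(results))   # most runs miss the real effect
```

Increase `n` or `effect` and the miss rate drops, which is the usual trade-off people mean when they talk about statistical power.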
We did it, we cracked the coin conundrum! We managed the money mystery! We checked the change charade! We … well you get the idea. Last post we (finally) determined if our coin was biased or not. Don’t worry, I won’t spoil it for you if you haven’t read it yet. I actually enjoyed working through a completely made up problem, so if you haven’t read it, you really should. Today we’re going to talk dogs, you’ll see what I mean, so let’s dive in.*
It looks like we’ve arrived at part 3 of what is now officially a trilogy of posts on statistical significance. There is so much more to say I don’t want to quite call this the conclusion. Instead, let’s give a quick review of where we left off and we can get back to determining if an observed value is significant.*
Well here we are two weeks into 365DoA, I was excited until I realized that puts us at 3.8356% of the way done. So if you remember from last post, we’ve started our significance talk, as in what does it mean to have a value that is significant, and how do we find out? Today is the day I finally break, we’re going to have to do some math. Despite my best efforts I don’t think we can finish the significance discussion without it and still manage to make sense. With that, let’s just dive in.*
If you’ve read my last post, I hinted that today we would discuss filtering. Instead I think I want to take this a different direction. That isn’t to say we won’t go over filtering, we most definitely will. Today I want to cover something else though: significance. So you’ve recorded your signal, taken an ensemble average, and now how do we tell if it actually means something, or if you are looking at an artificial or arbitrary separation in your data (i.e., two separate conditions lead to no difference in your data)? Let’s look at significance.*
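One common way to ask “is this separation real?” is a two-sample t statistic. The post excerpt doesn’t show the math, so here’s my own sketch (Welch’s version, with made-up measurements standing in for values from two recording conditions):

```python
import math
import statistics

# Welch's two-sample t statistic: the difference in means scaled by
# the combined uncertainty of both samples.
def welch_t(x, y):
    se = math.sqrt(statistics.variance(x) / len(x) +
                   statistics.variance(y) / len(y))
    return (statistics.fmean(x) - statistics.fmean(y)) / se

cond_a = [1.2, 0.9, 1.4, 1.1, 1.3]  # hypothetical values, condition 1
cond_b = [2.0, 2.3, 1.9, 2.2, 2.1]  # hypothetical values, condition 2

t = welch_t(cond_a, cond_b)
print(abs(t) > 2)  # → True: |t| ≈ 8, far beyond a rough ~2 cutoff
```

In practice you’d compare |t| against the proper t distribution for your degrees of freedom rather than a blanket “about 2,” but the intuition is the same: big |t| means the separation is unlikely to be an arbitrary one.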
Noise, it can be troublesome. Whether you are studying and someone is being loud or you are trying to record something, noise is everywhere <stern look at people who talk during movies>. Interestingly enough the concept of noise in a signal recording sense isn’t all too different from dealing with talkative movie goers, so let’s talk noise!*
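Since the surrounding posts lean on ensemble averaging, here’s a small sketch of mine (signal and noise levels invented) showing why averaging across repeated trials tames noise: the repeatable signal survives while the random part cancels out.

```python
import random
import statistics

random.seed(7)

# The same underlying signal recorded many times with additive noise;
# averaging across trials (an ensemble average) suppresses the noise.
signal = [0.0, 1.0, 0.0, -1.0]  # made-up repeatable signal

def noisy_trial():
    return [s + random.gauss(0, 0.5) for s in signal]

trials = [noisy_trial() for _ in range(400)]
average = [statistics.fmean(t[i] for t in trials)
           for i in range(len(signal))]
print([round(v, 2) for v in average])  # close to the clean signal
```

The noise on the average shrinks roughly like 1/sqrt(N) with the number of trials N, which is why recordings repeat a stimulus hundreds of times instead of once.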
So you wanna use a spectrogram… but why? What does a spectrogram do that we can’t do using some other methods for signal processing? As it turns out, there are a lot of reasons you may want to use the spectrogram and today we are going to cover some of those reasons, and number four may shock you! (okay not really, what do you think this is, a clickbait website?)*
Well ten days in and we’ve just introduced the idea of the spectrogram. While a lot of this information is just the broad strokes, I like to think that we’ve covered enough to give you a good idea about how to use these tools and what they are used for. However, we do need to discuss a limitation to the spectrogram, something called the banana of uncertainty, okay not quite the name, but you’ll see why I keep calling it that.*
Last post we introduced a new tool in our arsenal of signal processing analysis, the spectrogram. Without knowing how to read it, it just looks sort of like a colored mess. Don’t get me wrong, it is an interesting looking colored mess, but a mess nonetheless. Well today we are going to talk about how to interpret the plot and why exactly we would ever use this seeming monstrosity.*
To (somewhat) continue with our signal processing theme that we have going on at the moment, over the next few days, let’s look at something called the spectrogram. It’s three dimensions of fun!*
Waves! We’re officially one week through 365 Days of Academia! Woo! 1 week down, 51(.142…) weeks left! Let’s wrap up this week’s theme (there wasn’t originally a theme, but it kind of ended up that way) by talking about other ways we can get to the frequency domain. Specifically, let’s stop the wave puns and let’s talk wavelets!*
Okay, if you’ve been keeping up with these posts, we know about Welch’s method, Thomson’s method, the things that make them different, and the things that make them similar. The thing that both of these transforms rely on is the Fourier transform. What is the Fourier transform? Well, something I probably should have covered first, but whatever, this is my blog and we do things in whatever order we feel like, so let’s dive in!*
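To make the idea tangible, here’s a naive discrete Fourier transform written from scratch (my own toy example, far slower than a real FFT): each output bin k measures how much of frequency k, in cycles per record, is present in the input.

```python
import cmath
import math

# A naive discrete Fourier transform, just to show the idea: bin k
# correlates the signal against a complex sinusoid of frequency k.
def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

n = 64
x = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]  # 5-cycle sine

spectrum = [abs(c) for c in dft(x)]
peak = spectrum.index(max(spectrum[:n // 2]))  # search non-mirrored half
print(peak)  # → 5: all the energy lands in bin 5
```

For a real-valued input the second half of the spectrum mirrors the first, which is why the sketch only searches the first n/2 bins; library FFTs (e.g., `numpy.fft`) follow the same convention but run in O(n log n).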
One day someone looked at the windowed Fourier transform and said, “Don’t be such a square!” and thus window functions were invented. If you believe that, then I have an island for sale, real cheap. But seriously, let’s do a dive into what a window function is and why the heck there are so many of them, because there ARE a LOT! So let’s get started!*
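As a taste of one of those many window functions, here’s the Hann window computed from its formula (a sketch of mine): instead of chopping a record off like a rectangle, it tapers the edges smoothly to zero.

```python
import math

# Hann window: w[i] = 0.5 - 0.5 * cos(2*pi*i / (n - 1)), which rises
# from 0 at the edges to 1 in the middle of the record.
def hann(n):
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1))
            for i in range(n)]

w = hann(9)
print(round(w[0], 6), round(w[4], 6), round(w[8], 6))  # → 0.0 1.0 0.0
```

Multiplying your signal by a taper like this before the Fourier transform reduces spectral leakage from the abrupt record edges; the many other windows (Hamming, Blackman, Kaiser, …) just trade leakage against frequency resolution differently.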
For those of us old enough to remember the days of the Apple II, you know that storage has increased exponentially. Even just 10 years ago 20+ gigs of data seemed huge; now my cellphone has 64 gigs. Yet we still need more data storage and we are looking for new ways to get it. Now a way to use weak molecular bonding interactions to create well-ordered and stable metal–organic monolayers with optoelectronic properties has been found. The development could form the basis for the scalable fabrication of molecular optoelectronic devices.
Everything is silicon based, well mainly your computer, your TV, your iPad, and pretty much every piece of electronics in existence. Still the world turns, and so does technology, at a similarly fast pace no less. Even as the 2014 Nobel Prize in Physics has enshrined light emitting diodes (LEDs) as the single most significant and disruptive energy-efficient lighting solution of today, scientists around the world continue unabated to search for the even-better-bulbs of tomorrow. In this search we are now ditching silicon for new carbon-based electronics.
It’s a project that would make Tesla proud. Just imagine being able to instantaneously run an optical cable or fiber to any point on earth, or even into space. That’s what researchers are trying to do. Did I mention it’s instantaneous and involves no connection other than the air around us? Well if you are as excited as I am, then you should read on! If not, two words: laser weapons!!
Are you real? What is ‘real’? That’s more of a philosophy question than a scientific one, but what if a computer worked like your brain? What if, one day, the line between computer and human were blurred? That day might be coming sooner than you think.
Currently there are two major problems with designing a robotic brain. The first is hardware: the brain is an incredibly complex thing that we don’t even fully understand. Even if we could theoretically produce something close to that work of art, there is that second problem, the software. Designing software to take advantage of that type of power would take nothing short of genius, especially if it were going to be something easy enough that you or I could use.
The six million dollar man has nothing on these cockroaches. We can rebuild them, better than they were before; we have the technology, and as it turns out, we really do! While DNA robots may not, in themselves, be a new thing, a study published in Nature Nanotechnology is definitely not only new, but something to talk about.
You’ve seen a swarm of bees, you’ve seen a swarm of ants. But now, a research group at the Harvard School of Engineering and Applied Sciences has introduced us to a new kind of swarm, a swarm of robots.
The idea stems from, of all things, termites.
Normally, when you have any sort of large scale building operation, like a home for example, you have someone in charge telling each individual what to do. There are specialized functions for each person, an electrician, a carpenter, etc., and if one of them walks out on the project, the project is stalled until they are replaced.