Into the unknown
Today is going to be a somewhat anxious day for me. It’s the day I get to crack open my data and see what spills out. There was a process to get to this point, of course; it took me about a week. But today, with just a few clicks, I’m going to see if I have something or if my idea was never meant to be.
When we collect EEG data we can’t just use it outright. There are steps we need to take to clean the data and remove the noise that would otherwise make it unusable. That noise can come from almost anywhere; the raw recording is pretty much a mess before we clean it.
What kind of noise, you ask? Well, when a sensor moves on the skin it creates an artifact, so walking leaves several distinct artifacts in the data, and those can actually be related back to the person’s walking speed. There is also line noise: any time we work in a building with electricity (so basically every building) we have to deal with it. In the US that gives us a nice 60 Hz spike in our data that needs to be dealt with. Frankly, we usually just ignore the 60 Hz range for this reason rather than trying to figure out what is line noise and what is not.
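If you’re curious what dealing with that 60 Hz spike looks like in practice, here’s a minimal sketch of a notch filter using SciPy. The 500 Hz sampling rate and the synthetic two-tone signal are just assumptions for illustration, not our actual recording setup.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 500.0                       # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)

# Synthetic "recording": a 10 Hz alpha-band signal plus 60 Hz line noise
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

# Narrow IIR notch centered at 60 Hz; Q controls how narrow the notch is
b, a = iirnotch(w0=60.0, Q=30.0, fs=fs)
cleaned = filtfilt(b, a, raw)    # zero-phase, so the signal isn't shifted in time

# Power at 60 Hz drops sharply while the 10 Hz component is left mostly alone
freqs = np.fft.rfftfreq(len(t), 1 / fs)
before = np.abs(np.fft.rfft(raw))
after = np.abs(np.fft.rfft(cleaned))
idx_60 = np.argmin(np.abs(freqs - 60.0))
```

The narrow notch is why it’s tempting to just write off the 60 Hz range entirely: anything real that happened to live there is gone along with the line noise.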
The artifacts I hate the most come from the eyes. Eye movements, eye blinks, basically anything that happens with the eyes leaves an artifact in the data that we need to remove. Luckily our lab has a fancy way to do this! We have an adaptive filter, pioneered in our lab, that filters out this type of noise. I’ve also used it to remove EKG noise.
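Our lab’s filter isn’t something I can reproduce here, but the general idea behind adaptive noise cancellation can be sketched with a textbook LMS filter: given a reference channel that mostly picks up the artifact (say, an electrode near the eye), you learn weights that predict the artifact’s contribution to the EEG channel and subtract the prediction. Everything below (signals, filter length, step size) is illustrative, not our actual method.

```python
import numpy as np

def lms_cancel(primary, reference, mu=0.01, n_taps=8):
    """Textbook LMS adaptive noise canceller (not our lab's filter).
    Learns to predict the artifact in `primary` from `reference`
    (e.g. an ocular channel) and subtracts the prediction."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for i in range(n_taps, len(primary)):
        x = reference[i - n_taps:i]     # recent reference samples
        est = w @ x                     # estimated artifact sample
        e = primary[i] - est            # cleaned sample = prediction error
        w += 2 * mu * e * x             # LMS weight update
        out[i] = e
    return out

# Illustrative demo: a slow "eye" artifact leaking into an EEG channel
fs = 500.0
t = np.arange(0, 4.0, 1 / fs)
brain = 0.2 * np.sin(2 * np.pi * 10 * t)   # the signal we care about
eog = np.sin(2 * np.pi * 1 * t)            # reference artifact channel
eeg = brain + 0.8 * eog                    # contaminated channel
cleaned = lms_cancel(eeg, eog)
```

The appeal of this family of methods is that the same machinery works for anything you have a reference for, which is why the same filter can also be pointed at EKG contamination.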
There’s a lot to it. You run the code, then you go back and visually inspect the result to make sure you’re happy with it. It’s a process: running the code takes time, and then I spend hours tweaking parameters to make sure I’m happy with how the data looks without completely eliminating the signal we’re interested in finding.
Well, last night I finished, and I’ve segmented the data for one subject. Segmenting the data means pulling out the sections I’m interested in. We have control conditions (eyes open, eyes closed, eyes moved side to side and up and down), then we have our experimental conditions, which are usually followed by another control so we have some pre and post.
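Segmenting itself is conceptually simple: given the onset times of the sections you care about, cut fixed-length windows out of the continuous recording. A rough sketch, where the array layout and all the numbers are assumptions rather than our actual pipeline:

```python
import numpy as np

def segment(data, fs, onsets_s, window_s):
    """Cut fixed-length segments from a continuous recording.
    data: (n_channels, n_samples); onsets_s: event onsets in seconds."""
    n = int(round(window_s * fs))
    segs = []
    for onset in onsets_s:
        start = int(round(onset * fs))
        if start + n <= data.shape[1]:        # skip events too close to the end
            segs.append(data[:, start:start + n])
    return np.stack(segs)                     # (n_segments, n_channels, n_samples)

# e.g. two 1-second segments from a 10-second, 2-channel recording
fs = 100.0
data = np.arange(2 * 1000).reshape(2, 1000)
segs = segment(data, fs, onsets_s=[0.0, 2.0], window_s=1.0)
```

Compared to the cleaning step, this really is just bookkeeping, which is why it goes so much faster.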
So I’ve cleaned my data for all subjects, and now I have one subject’s worth of data segmented. That should be enough to tell me if I have something. I’m still going to check the rest; no matter what, I need to look at all of it. But segmenting the data doesn’t take that long (believe it or not!), so today is the day! Wish me luck.