An early present
Since my deadline has come and gone, I can look around for a moment, and more importantly I don’t feel the pressure to hyperfocus on a single project. So today, instead of working on that dataset, I’m switching to a dataset I need to get done before my PhD proposal defense. It’s data that will (assuming I find something) further help make my case for studying my new “super secret” technique for the next 2-3 years, depending on how long everything takes.
Both my Co-PI and I have been dying to analyze this dataset. We collected it specifically for a collaborative effort between him and me to incorporate my technique into work he wants to do, so we ran a quick pilot experiment to get some data for it. Using some of the things we’ve learned from previous attempts at using my technique, we modified how we did things to improve the conclusions we could draw from the data.
We also collected a whole lot more trials. Orders of magnitude more, which means we can be more certain about what we see in the dataset. Because any electrophysiological recording will have a certain amount of white noise added to it, by repeating trials we can remove that noise. Not completely, mind you, but it averages out: for white noise to be white it needs to be random, so sometimes it will be positive, sometimes negative, etc. Meanwhile, the thing we are interested in should remain fairly consistent across trials. Taking the average across trials should cancel out most of the white noise while having minimal effect on the thing we are studying, assuming the thing we are studying is time locked. In our case we are using electrical stimulation, so it’s time locked and very repeatable since we’re using the same location and amplitude across trials.
The benefit of each additional trial diminishes the more trials you have (since you get closer and closer to fully canceling out the white noise), so after about 100 trials the improvement from adding more typically isn’t worth the extra time. In our case we had 1000 trials because we could collect the data quickly; all in all it took less than an hour to collect 1000 trials per condition, and we had 8 conditions. Well, 8 plus three bonus conditions I decided to toss in at the last minute because I really wanted the data and the setup/logistics/planning/etc. all take forever. That’s kind of how things go in the lab, and since we had the time we might as well make the most of it.
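If you want to see the diminishing returns for yourself, here’s a minimal sketch of trial averaging on simulated data. The “evoked response,” noise level, and trial counts are all made up for illustration; they have nothing to do with our actual recordings. The leftover noise after averaging shrinks roughly like the noise standard deviation divided by the square root of the number of trials:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.1, 200)                 # a fake 100 ms epoch
signal = np.exp(-((t - 0.03) ** 2) / 2e-5)   # toy time-locked evoked response
noise_sd = 2.0                               # single-trial noise dwarfs the signal

def average_trials(n_trials):
    """Simulate n_trials noisy epochs of the same response and average them."""
    trials = signal + rng.normal(0, noise_sd, size=(n_trials, t.size))
    return trials.mean(axis=0)

for n in (1, 10, 100, 1000):
    residual = np.std(average_trials(n) - signal)  # leftover noise after averaging
    print(f"{n:5d} trials -> residual noise ~ {residual:.3f}")
```

Because the residual falls off as 1/sqrt(n), going from 100 to 1000 trials only buys you about another 3x reduction in noise, which is exactly why the gains flatten out even though more trials never hurt.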
So late last night, against my better judgement, I started processing the data, knowing full well I wasn’t going to finish and would have to save halfway through and pick it up today. Today I plan on finishing the data cleaning and starting the segmentation and analysis. I’m super excited to see what this dataset has for me. If all goes well, this will be the final piece of the puzzle and the thing that gets my Co-PI more onboard with this research. The burden of proof is high, but I think I’m getting close, so now it’s just a matter of getting there.
It’s all very exciting and I should have some answers today, so tomorrow if I seem either incoherently excited or overly depressed you’ll know how it went!