The next big experiment

Well, things are moving fast around here, like I predicted they would this year. Of course, things are currently going better than I had hoped, but that could change. Last week was a busy one and next week will be no different, but next week is a particularly big one, because I’m going to be running another experiment for a somewhat different project. Yep, another “big idea” experiment is coming.
A while back I had a “big idea” and since then we’ve been rushing to turn it from an idea into a reality. Research can move at breakneck speed or super slowly depending on your frame of reference. In my case, things have been moving slowly, but maybe to an outsider that isn’t the case. Since that big idea, we’ve had exactly one experiment for it. I didn’t mention it last time because, well, it was sort of disappointing.
The results weren’t great. In fact, it turns out the equipment issues we were having were worse than I had anticipated and we had absolutely nothing, as in zero data. We recorded stuff, but it was literally all 60 Hz line noise (noise from the electrical mains). It was a textbook example of line noise, but it was only line noise, which means I had zero usable data. We’ve since tested and retested all the equipment, but haven’t been able to replicate the problem, so we’re assuming something got disconnected.
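As an aside for anyone curious: one quick way to tell that a recording is nothing but line noise is to check how much of the signal’s power sits at the line frequency. Here’s a minimal sketch in plain NumPy (the function name and thresholds are mine, not from any EEG package):

```python
import numpy as np

def line_noise_fraction(signal, fs, line_freq=60.0, bandwidth=1.0):
    """Fraction of total spectral power within `bandwidth` Hz of line_freq."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = np.abs(freqs - line_freq) <= bandwidth
    return power[in_band].sum() / power.sum()

# Synthetic check: a pure 60 Hz hum vs. hum mixed with a 10 Hz "signal"
fs = 1000
t = np.arange(0, 2, 1 / fs)
hum = np.sin(2 * np.pi * 60 * t)
mixed = hum + np.sin(2 * np.pi * 10 * t)
print(line_noise_fraction(hum, fs))    # ~1.0: nothing but line noise
print(line_noise_fraction(mixed, fs))  # ~0.5: real signal is present too
```

If that fraction is near 1.0 across every channel, you’ve got my last session: a perfect hum and nothing else.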
The experiment happened outside of the lab, in a tiny room we had never been in before. It wasn’t ideal, and the fact that some of the other equipment turned out to be connected incorrectly, which threw off a lot of what we were trying, didn’t help. So at the end of the day, we learn from it and plan for the next one. The next one being tomorrow.
That’s right, tomorrow, if all goes well, we’ll get our second shot at collecting data for this project. I am both nervous and excited. I know exactly what I need to see in the data to know we’re actually recording something, but last time I ignored my instinct when I didn’t see it and assumed something else was going on. So this time around, if I don’t find what I expect, I won’t continue until we get that “thing.”
In EEG we can’t really tell directly whether the equipment is working properly, so we rely on artifacts to let us know. To check, we have people move their eyes around or blink a lot. This causes a large change in the data that we can relate to the movement. If someone blinks in rapid succession a few times, you’ll be able to see the shift in the data plain as day. Once we see that, we trust that everything else is going according to plan, because we know we’re actually recording something.
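That kind of sanity check can even be automated: flag a channel as “alive” if it contains several deflections far outside its typical range. A rough sketch, assuming raw samples in a NumPy array (the function name and thresholds are my own, not from any EEG toolbox):

```python
import numpy as np

def has_blink_like_artifacts(channel, threshold=8.0, min_events=3):
    """True if the channel shows several large excursions (e.g. eye blinks),
    suggesting we're recording a live signal, not a flat or hum-only trace."""
    centered = channel - np.median(channel)
    # Median absolute deviation: a robust measure of the typical sample size
    mad = np.median(np.abs(centered))
    if mad == 0:
        return False  # dead flat channel
    big = np.abs(centered) > threshold * mad
    # Count separate excursions (rising edges), not individual samples
    events = np.count_nonzero(np.diff(big.astype(int)) == 1)
    return events >= min_events
```

On a pure 60 Hz sine this returns False (no sample ever strays far from the typical amplitude), while a noisy trace with a few big blink-shaped humps returns True, which is exactly the distinction I failed to act on last time.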
I didn’t see the changes I was looking for, but since I was using different software, I assumed that was the cause. Now that I know we weren’t recording, I also know I should rely more on my experience and trust that even though the software I’m using for this experiment isn’t what I would normally use, it should show the same behavior.
As an aside to explain: the software we use does apply a few filters (to remove noise), and I had assumed that one of those filters was removing the shift in the data I would’ve expected. Our normal software corrects for this as well, but it displays the data on a window basis, meaning there is a sliding bar where everything before the bar is the new data and everything after it is the previous data (think Pac-Man: when you exit the right side of the screen, you reappear on the left side). The software we’re using doesn’t do this; the data scrolls to the left of the screen instead, so it’s slightly disorienting depending on what you’re used to.
Long story short (haha, right), I am now certain that I should see these artifacts in our data when we’re recording, and if I don’t find them next time, it means there’s a problem with the data somewhere that I need to address. Knowing this means we won’t have the same problem twice, but we still have a lot to learn. I’m keeping my fingers crossed that this time goes well, or at the very least, that we get the data we’re hoping for! I need to demonstrate to hospital-PI that this will work before I can do some of the really cool stuff I have planned.
More good news!
Remember that EKG sensor board I was building? Well, I can tell when the leads are floating, but that’s about it. So the question becomes: is it the electrodes, bad sensitivity, a bad analog filter, or simply poor component selection? The board’s not proven good, either!
At least being able to identify a problem during a run is invaluable.
I talk a lot, but here’s a story from MSOE. Well, a couple of stories. I once got a problem completely right, but I ran out of page space and my work was disorganized. I got docked 2% for “wasting [her] time.” Fair or not, that’s one. Another time, I got a problem completely wrong, like “the moon is six inches from the Earth” wrong. I noted, “This is clearly wrong, but I don’t know why.” It was a simple math error; the professor found it immediately. Two points off.
Process matters, and knowing when something’s wrong is almost more important than the output itself. Anyone can use a calculator, but knowing 7 * 8 does not equal “Low Battery” is a useful skill.
May 15, 2022 at 12:25 pm
At my college at least, it was pretty traditional for professors to weight demonstrated knowledge of the correct solution path more highly than a correct final answer. Simple math/algebra errors or other brain farts only earned a few points off. (Which was good, because those were the primary things I lost points to.)
May 15, 2022 at 3:17 pm
I just remembered: a professor once offered extra credit if someone brought in a mathematical proof of a retroreflector. A good engineer would have used three-dimensional vector mechanics. I used 2D geometry to try to prove it. It didn’t work. I asked for guidance and he said, “This is wrong, but you’re the only person who tried, so you get the credit.”
I think that’s a better lesson than how a retroreflector works.
May 15, 2022 at 6:01 pm
I like that kind of attitude! It encourages learning instead of enforcing the fear of being incorrect.
May 16, 2022 at 7:57 pm
I had a similar experience. Sometimes just showing you attempted the problem was enough to earn some points. Although every once in a while I have come across professors who only cared whether the answer was correct. I hated it, and I felt like it only encouraged people to cheat instead of attempting the problem.
May 16, 2022 at 7:55 pm
Thank you for sharing that story! I always felt that grading should be handled that way. Knowing you didn’t get the right answer is just as important as knowing how to get it right.
May 16, 2022 at 7:53 pm