Big claims and big evidence
In science, ideally, a large claim needs a lot of evidence to support it. In theory, anyway; in practice, at the speed of the internet, claims often get taken as truth no matter how thoroughly they're corrected later. The claim that vaccines cause autism, for example, has been debunked over and over, but it persists despite the piles of evidence to the contrary. Global warming is another good example of how having a lot of evidence doesn't guarantee acceptance.
A lot of these issues spawn from money. Wakefield, for example, was paid a large sum of money to make the claim that vaccines cause autism, specifically the MMR vaccine. Technically he was paid to link the MMR vaccine with SOMETHING, anything negative, and autism was the low-hanging fruit, so to speak, since we don't really understand what causes autism. He then went on to produce an MMR vaccine that he claimed wouldn't cause autism; that is what we call a conflict of interest, and he never disclosed it. The study itself was incredibly flawed and was literally documenting child abuse, but for whatever reason people still latch on to his claims.
On the other end of the spectrum is the global warming issue, renamed climate change because people seem to conflate weather with climate. We have more evidence for man-made global warming than we do for the link between smoking and cancer, which, mind you, is a very substantial link. Yet people with a lot of money are trying to confuse the public into thinking the science isn't settled. The truth is that nothing is ever 100% certain, because statistics doesn't allow for that, but when we're 98% certain something is true, that is serious.
I bring up these two examples because they are both very mature research areas; we've known about man-made climate change for generations now. They also highlight why it's so stressful to make claims as a scientist, because there are only a few possible outcomes if a claim goes mainstream. The first: you're correct, it turns out to be true, and people accept it. That's the best case. The second: you're correct, but people don't want to believe it because it goes against their deeply held beliefs. That's bad. In my opinion the worst case is the third: you're not correct and people accept it as truth. That is potentially dangerous.
Science needs to move fast sometimes. When COVID hit, scientists tried throwing a lot of different things at the problem to slow it down or stop it. We had the hydroxychloroquine push, then the ivermectin push. I'm pretty sure we have people drinking urine or something now too… I have no words, but please don't drink your urine, ever. For at least the first two (don't ask me why people are obsessed with drinking urine or shoving things in their asses for health, I don't get it), the misconception that these things treat COVID came about exactly because we try to move fast.
Both hydroxychloroquine and ivermectin were identified as possible treatments for COVID. However, big claims come with the need for a lot of evidence ("big" evidence), and as more evidence came in, it showed this wasn't true. Neither drug improved outcomes, and in some instances they were associated with worse outcomes. More importantly, both drugs can cause side effects and complicate treatment, so you're not only giving someone something that does absolutely nothing to solve the problem, you're making it harder to treat them properly.
That's why latching onto claims can be dangerous, but it also means that as researchers we need to be cautious about the claims we make. COVID was a good example of some researchers seeing an opportunity for a high-impact paper with minimal effort, because everything regarding COVID was being cited by everyone else working to solve the crisis. COVID is certainly not the only example, but it's probably the most recent one.
Which brings me to the connection with my research. A few days ago I discussed being a "trailblazer," or being the first (here). I'm excited, and I'm almost certain one of the things I'm doing will work; again, trying to be a good scientist and not say 100%, but it feels like there's a 98% chance of success, and doing something for the first time will be very cool. It's the other thing I'm concerned with, because it's controversial among my peers, and while we've had initial success with the technique I'm proposing, it's a big claim.
At this point I could, theoretically, talk all I want about it and what I'm doing. In theory. I doubt it would get latched onto by the mainstream or anything, but I also don't want to spread anything that would give people hope that this technique could help them without thoroughly proving it actually works. So far I've run two n = 1 experiments to determine if my "super secret technique" (SST) will actually work. Both times things have looked positive, which gives me hope. It's also been funded, so others think the evidence I've collected is strong enough to merit a huge chunk of money to double-check. It's high risk, but high reward.
I'm feeling the pressure particularly hard lately since DARPA decided that what I was doing was worth highlighting… maybe. I was nominated, but I THINK there's still one more step before I get to do a poster presentation, and one step after that to be selected for a five-minute talk on SST. They didn't give me a timeline for when I would hear back, but the local event will be held in November (I think…). That's enough time to collect all the data I need to show that this works, but it also means I could get selected and end up presenting on a failure.
You can imagine how great it would feel to be selected to present to high-ranking government officials only to showcase how you tried something and failed at it. I'm not sure embarrassed would even begin to cover how I would feel.
The good news is that things are moving forward, and I should soon have a final (or mostly final) answer on how far we can push SST and what kind of information we would get from it. So far things are looking pretty positive; we've had two apparent successes with the technique, and that bodes well for future experiments. Still, it would be nice to have a definite answer before people start talking about me or trying to reward me.
Today I'm doing another dive into the data I've already collected to see how repeatable my result is WITHIN the two subjects I've collected data from. This will help in general before I start collecting more data, so I know what to expect, but it will also make me feel better. Or at least it will make me feel better if the variance between trials is nice and low.
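The kind of within-subject repeatability check I'm describing can be sketched in a few lines: for each subject, compute the mean, standard deviation, and coefficient of variation (CV) across repeated trials, where a low CV means low trial-to-trial variance relative to the signal. The trial numbers below are made-up placeholders, not actual SST data.

```python
# Sketch of a within-subject repeatability check: per-subject mean,
# sample standard deviation, and coefficient of variation across trials.
# All measurements here are hypothetical placeholder values.
import statistics

# Made-up trial measurements for the two subjects (arbitrary units).
trials = {
    "subject_1": [10.2, 9.8, 10.5, 10.1],
    "subject_2": [12.0, 11.6, 12.3, 11.9],
}

def repeatability(values):
    """Return (mean, sample std dev, CV) for a list of trial values."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation (n - 1)
    cv = sd / mean                 # coefficient of variation
    return mean, sd, cv

for subject, values in trials.items():
    mean, sd, cv = repeatability(values)
    print(f"{subject}: mean={mean:.2f}, sd={sd:.2f}, CV={cv:.1%}")
```

A rule of thumb like "CV under a few percent" is one way to decide the variance is "nice and low" before committing to collecting more data, though the right threshold depends entirely on the measurement.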
I'm planning to meet with hospital-PI on Monday to discuss the results. He was the biggest critic of the technique, but he's slowly coming around to the SST side, and as I try to remind him, you don't know until you look. Assuming something won't work is a whole lot different from showing it. Hopefully it will work. If not, I take some minor comfort in knowing that at least my dissertation will forever be a warning to future students and researchers who stumble upon SST like I did.
Because there’s no journal of null results, even though there really should be…