The big rush

A while ago I had an idea, a “big idea,” but the second I had it the clock started ticking, because ideas are not that unique. There is a gap in research and we try to fill that gap when we see it. I noticed a gap, and because I noticed it, there’s no reason others won’t notice it as well. Now we’re in an unseen race to publish, and there are still some speed bumps in the way causing issues. I’m hopeful we can get there first, but I wouldn’t be surprised if we had competition.
The thing about research is we’re always racing to be the first to do something, because IMPACT! It’s a horrible system where we literally earn points the more we publish and the more people notice us. We have total number of citations, h-index, and a whole lot of other things to consider. To help us on our quest to level up our research, journals provide metrics of their own; impact factor, for one, tells me how many times I could (on average) expect to be cited if I publish something in that respective journal.
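For anyone who hasn’t run into these metrics before, here’s a rough sketch of how the two I keep worrying about are usually calculated. This isn’t any official formula, and all the citation counts below are made up for illustration:

```python
# Rough sketch of two common research metrics (illustrative only;
# the citation numbers used in the example are invented).

def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_to_recent_items, recent_items_published):
    """Journal-style impact factor: citations this year to items from the
    previous two years, divided by the number of citable items published
    in those two years."""
    return citations_to_recent_items / recent_items_published

# Example with made-up numbers:
print(h_index([12, 7, 5, 3, 1, 0]))  # -> 3
print(impact_factor(250, 100))       # -> 2.5
```

So an h-index of 3 just means three of my papers each have at least three citations, and the impact factor is just citations per citable item over a two-year window.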
It’s a horrible system where you need a high score or you get ignored by funding agencies, jobs, and just about everything else you could want in your research career. My score is low, which is not uncommon for an early career researcher, but it’s also expected to be higher. So far I’ve published two papers in the last six months or so with one more coming out soon (here), so I’ve made strides to correct this, but even so there is a delay between publishing and being cited.
Ideally this year will be a good year for my made-up score that defines who I am and my value as a researcher. A good way to make a big splash is to do some novel work, and the two projects I’ve come up with are incredibly novel; both “super secret technique” (SST) and “big idea” (BI) will (in my mind anyway) get people interested in who I am. I wouldn’t call them groundbreaking or life changing, but they both stand to change the way we do things in a very narrow subset of the field (spinal cord injury, prosthetics, brain-machine interfaces, things like that).
Which is huge for me, but I’m not the one who gets to decide; it’s other researchers. The problem is being the first to do something. Why am I talking about BI and not SST? Well, so far I’ve found two instances of researchers doing something very close to BI; one of them I stumbled upon yesterday, and it was even closer than the previous instance. So close it makes me nervous that we’re not doing anything on that front yet! They are very close to something that I want to do.
I have a few things working in my favor. BI has been my baby for a long time now, but it only recently became fully formed, when I came up with the one thing that would make it work. So I know better than most how to deal with the data coming from it. I’ve already read through the two examples, and neither is great.
Frankly, I have a better way of doing it and I’m excited to try my hand at it, but that won’t happen until the hospital approves our IRB for the project, which should be occurring soon?! I hope. I’ve already had one false alarm for the data collection (here), but I’m hopeful that we’ll get approval shortly. We also have better resources than the other two examples, plus they were more proof-of-concept and mine is definitely not.
With two papers that are somewhat close, I don’t know that I have a lot of time to get to the publishing phase of things. The problem with not being the first is that someone else will be known for what I’m trying to do. I’m not looking to be famous or anything, but in a world that judges me by my completely arbitrary number, I would like to have something a bit higher than I do now. That way, when the time comes for me to be a PI, I will be able to help people better.
How to measure achievement seems to be a perennial problem. Any indicator that you try to use will tend to become a goal in its own right. I was familiar with standardized tests as an example of this effect: the test is supposed to be a detector for learning or mastery of a larger body of material. But then schools start “teaching to the test,” and eventually, the only skill the test can detect is the skill of passing the test.
At first glance, I would think that number-of-citations would be an okay measure of research quality – because if lots of people are citing your work, it’s probably useful, right? But if (as you explained in a previous post) researchers are basically doing PR to get their number of citations up, or choosing the journal that will get them cited the most rather than the one most topical for their work, then it sounds like the measurement method has stolen the show from the thing it’s supposed to measure. People are wasting time competing for the biggest number when they could be getting more research done.
I didn’t know this particular problem had infected higher academia, and it’s disappointing to find out. I’m curious if you have any ideas for a better way to rate research, or is it perhaps not really possible?
Personally I think all well-done research has value, so I don’t think using metrics like popularity (number of citations) is a good way to score it. Unfortunately there isn’t a good solution. Some organizations require large numbers of publications AND high citation numbers; others just want publications in high-impact journals. It’s just tough really, but it affects funding for sure; I’ve seen it happen. It also affects job offers and prospects in research/academia. So for me specifically, if my number is low I could get passed over for grants or jobs simply because no one is citing my work.
To me the problem is that people get sucked into the citation circle, where everyone cites particular papers because others are already citing the same papers. We had a person in our lab who got pulled in for whatever reason, and his one paper has like 300+ citations now? I mentioned it in a previous post, I think.
The best solution is probably the hardest: hiring and/or funding agencies actually reading what people are publishing and looking at the quality of the work being done. But that would be a lot more work than looking at a number, so it probably won’t happen, despite being (what I think is) the best indicator of quality.
Sometimes people don’t get cited because other researchers can’t find the work; that’s why selling yourself is so important. I’m probably terrible at it. I went to a conference right before starting my PhD (before the COVID times!!) and a colleague who was interested in continuing my work said he couldn’t find me on the internet, so not a great sign.
Then again, it could just be me and my research. It may be too niche (the robot paper, for example, is for a very small subset of robotics/prosthetics researchers), it may be too new (like my recent publications), or it may be that it’s not very good compared to others in my field (sad thought, but it’s an honest consideration).