Believe it or not, you don’t just enter a career and suddenly have 500 publications. You build from the start of your scientific endeavors throughout your career, often balancing clinical or other obligations along the way.
And I agree with the comment above that the number of publications alone does not reflect the quality of the work. As one who has submitted grant applications, been awarded grants, and currently serves as a consultant on funded grants, I admit there are many hazards and trip wires along the way. But there are also many dedicated professionals trying to carry out the mission of the section they serve on. Applications are supposed to be novel and innovative. They are supposed to be justified and justifiable.
You do have to show that you have the support and personnel to accomplish what you are proposing. There’s a formula for writing these grants, and it’s taught in courses available at almost any university; the courses are typically run not by university faculty but by outside specialists brought in to teach people how to write grants.
There is a sense among scientists that only the rich get richer: only those who are already funded continue to get funding. That doesn’t leave a lot of room for bright new people coming up who don’t yet have a strong trail of publications, or who don’t know people on the study sections. There’s definitely a club-membership feel to the process.
A very short pause in funding might give a universal slap upside the head to everyone involved: they need to be on their game, the process needs to be fair, and the money given is to be spent wisely and as outlined in the grant’s budget. Did you know that the home institutions add an overhead fee of anywhere from 40 to 60% on top of the grant funds requested? That’s why there are so many careers on the line right now.
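To make that overhead arithmetic concrete, here is a minimal sketch; the $500,000 direct-cost figure is an illustrative assumption, not a number from the comment above:

```python
def total_award(direct_costs, indirect_rate):
    """Total grant cost once the home institution adds its
    indirect (overhead) rate on top of the direct costs requested."""
    return direct_costs * (1 + indirect_rate)

# A hypothetical $500,000 direct-cost grant at the 40-60% rates above:
low = total_award(500_000, 0.40)   # 700000.0 total
high = total_award(500_000, 0.60)  # 800000.0 total
```

At a 60% rate, nearly 40% of the total award goes to the institution rather than the research itself, which is why a funding pause puts institutional budgets, not just individual projects, at risk.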
How does one determine the best metrics for successful outcomes? Is the number of citations enough? Also, I’m thinking you’d agree with some screening for study design. The key questions about a project are likely: (1) is it worth doing, (2) will the study actually produce a measurement of what it says it is trying to measure, and (3) is this the right person to be doing it? Perhaps you’d want to randomize on questions 1 and 3.
Thanks!
[sigh..I see someone said this earlier and better, below :)]
Interesting ideas. There is probably much that could be done to simplify the application process; that seems like something everyone would like. Reducing multiple awards also seems like a good reform.
The modified lottery idea is also interesting. I have served on admissions committees of highly selective institutions, and I have long thought that, once applications were culled to the top 20%, you could probably randomly select the final 5% from this group and produce a great class. It might even reduce bias. We spent a lot of time agonizing over that final selection process. A lottery of the top quintile would probably work just as well and save everyone a lot of time and effort. The same might be true for NIH grants.
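The modified lottery described above can be sketched in a few lines. The cull and selection fractions (top 20%, final 5%) follow the comment; the application names and scoring are hypothetical stand-ins:

```python
import random

def modified_lottery(applications, scores, cull_frac=0.20, fund_frac=0.05, seed=None):
    """Cull applications to the top `cull_frac` by review score,
    then randomly fund `fund_frac` of the full pool from within
    that shortlisted group."""
    rng = random.Random(seed)
    ranked = sorted(applications, key=lambda a: scores[a], reverse=True)
    n = len(applications)
    shortlist = ranked[: max(1, int(n * cull_frac))]   # top quintile survives review
    n_funded = max(1, int(n * fund_frac))              # final 5% drawn by lot
    return rng.sample(shortlist, n_funded)

# Example: 100 hypothetical applications with review scores 0-99
apps = [f"app{i}" for i in range(100)]
scores = {a: i for i, a in enumerate(apps)}
funded = modified_lottery(apps, scores, seed=1)
```

Because every proposal in the shortlist has already cleared peer review, the random draw mainly removes the noisy (and bias-prone) fine-grained ranking at the very top.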
Nothing short of getting government completely out of the system will work. Private organizations have to answer to their donors and will better monitor productivity. Virtually unlimited access to taxpayer money will guarantee that the politicians and research grifters will continue the bureaucratic old boy networks that we have now.
Just my two cents. What is the purpose of NIH grant funding? Is there a stated mission? Are the study sections living up to that mission? Are these grant submissions blinded? How blinded are they? I think there needs to be a balance among trainees (fellows) applying for grants, young faculty, and established faculty. I do think the grant process needs to be competitive, but the scrutiny applied should increase as the researcher progresses. Thanks for listening to the old retired pharmacist!
Considering that this is Sensible Medicine, I’ve been a bit surprised by the lack of balance in the discussion of NIH funding mechanisms. In many ways these posts read as if grievances are being aired against study section reviewers who didn’t happen to like your grants.
As NIH funding is relatively mysterious to those who don’t apply for federal grants, phrases such as “Ideally, they pick people who have no idea what you are doing or why it is important, and are not as successful as you, so they can hate read your proposal.” might be taken as only semi-serious by a casual reader. The goal, of course, is to have experts read your grants on NIH study sections, but this is achieved with varying degrees of success.
Yes, NIH study sections and funding mechanisms are mercurial and inefficient. Yes, the expertise of the panels could always be improved. Yes, a randomized assessment of funding approaches is a good idea. But we need to be careful about what we consider to be a successful or impactful grant, and there are a couple of assertions here that are fairly easy to take issue with. For example, the idea that you are “mediocre” as a scientist unless you have a paper with at least 1,000 citations is complete nonsense. This number is entirely arbitrary and does not translate across all (most?) areas of research. In fact, if you want more innovative thinking in the NIH, stacking study sections with establishment scientists who have hung around long enough, and learned to play the game well enough, to have lots of papers with over 1,000 citations might be exactly the opposite of what’s needed.
I work in neonatal neuroprotection, developing therapies for babies with various kinds of brain injury. The number of serious labs doing preclinical (animal) work to support and inform future clinical trials can be counted in the tens worldwide. Despite this, the field produced what I would argue is the most recent major therapeutic advance in neurology: therapeutic hypothermia (TH) for newborn infants with hypoxic-ischemic encephalopathy (HIE). TH for HIE was added to resuscitation guidelines in 2010 and is incredibly effective: the NNT to prevent death or major disability is 7. The study in piglets that established the mechanism has been cited 495 times according to Google Scholar (Thoresen et al., Pediatric Research 1995). The study in sheep that developed the TH protocol now used clinically (Gunn et al., JCI 1997) has been cited 732 times. These papers formed the basis of a therapy that will save millions of lives, but there simply aren’t enough people in the field for them to hit an arbitrary number of 1,000 citations so that they can suddenly be considered impactful work by “good” scientists.
Suggesting that a grant is only successful if it results in a large number of citations or publications is equally problematic. I could highlight evidence that scientists who have more papers, more citations, and publish in higher impact journals are, on average, more likely to have their work retracted and therefore suggest that they are “worse” scientists. Of course, we know this isn’t true.
The NIH could absolutely use some reform, but sticking with these outdated metrics of “success” as the goal will only decrease the quality of the output as people try to crank out more papers and chase more citations. Instead, there should be field-specific assessments of success: for instance, the translation of therapies from preclinical studies to clinical trials and, ideally, successful clinical trials with meaningful outcome measures.
Would you describe the appointment of Dr. Jay Bhattacharya to head the NIH as “anti-science”? If anything, the former leader of NIAID, Dr. Fauci, recently pardoned by the past administration for crimes he hasn’t even been formally charged with, was rabidly anti-science.
Let’s hope Dr. B can make changes. Bless him for trying. Personally, I don’t know if I could work with the administration. Number one rule in medicine: don’t trust administration.
Not all of this is true, and it is overly harsh.
How about other countries that fund studies? What are their methods for doing so, and are any worthy of trying?
You are fearless and outrageous. Love your articles.
A very long exposition. You have become an apologist for this renegade anti-science administration. They will take you all down. Buyer beware