Discussion about this post

Tommy Wood, BM BCh (MD), PhD:

Considering that this is Sensible Medicine, I’ve been a bit surprised by the lack of balance in the discussion of NIH funding mechanisms. In many ways, the posts read as if grievances are being aired against study section reviewers who didn’t happen to like your grants.

As NIH funding is relatively mysterious to those who don’t apply for federal grants, a casual reader might take semi-seriously phrases such as “Ideally, they pick people who have no idea what you are doing or why it is important, and are not as successful as you, so they can hate read your proposal.” The goal, of course, is to have experts read your grants on NIH study sections, though this is achieved with varying degrees of success.

Yes, NIH study sections and funding mechanisms are mercurial and inefficient. Yes, the expertise of the panels could always be improved. Yes, a randomized assessment of funding approaches is a good idea. But we need to be careful about what we consider a successful or impactful grant, and there are a couple of assertions here that are easy to take issue with. For example, the idea that you are “mediocre” as a scientist unless you have a paper with at least 1,000 citations is complete nonsense. That number is entirely arbitrary and does not translate across all (most?) areas of research. In fact, if you want more innovative thinking at the NIH, stacking study sections with establishment scientists who have hung around long enough, and learned to play the game well enough, to have lots of papers with over 1,000 citations might be exactly the opposite of what’s needed.

I work in neonatal neuroprotection, developing therapies for babies with various kinds of brain injury. The number of serious labs doing preclinical (animal) work to support and inform future clinical trials can be counted in the tens worldwide. Despite this, the field produced what I would argue is the most recent major therapeutic advance in neurology: therapeutic hypothermia (TH) for newborn infants with hypoxic-ischemic encephalopathy (HIE). TH for HIE was added to resuscitation guidelines in 2010 and is incredibly effective; the NNT to prevent death or major disability is 7. The study in piglets that established the mechanism has been cited 495 times according to Google Scholar (Thoresen et al., Pediatric Research 1995). The study in sheep that developed the TH protocol now used clinically (Gunn et al., JCI 1997) has been cited 732 times. These papers formed the basis of a therapy that will save millions of lives, but there simply aren’t enough people in the field for them to hit an arbitrary threshold of 1,000 citations and suddenly be considered impactful work by “good” scientists.

Suggesting that a grant is only successful if it results in a large number of citations or publications is equally problematic. I could highlight evidence that scientists who have more papers, more citations, and publish in higher impact journals are, on average, more likely to have their work retracted and therefore suggest that they are “worse” scientists. Of course, we know this isn’t true.

The NIH could absolutely use some reform, but keeping these outdated metrics of “success” as the goal will only decrease the quality of the output as people try to crank out more papers and collect more citations. Instead, there should be field-specific assessments of success, for instance the translation of therapies from preclinical work to clinical trials and, ideally, successful clinical trials with meaningful outcome measures.

Philip Miller:

A very long exposition. You have become an apologist for this renegade anti-science administration. They will take you all down. Buyer beware.

