Discussion about this post

David AuBuchon:

- Or we might find that impact factor means squat.

- Don't forget that this novel funnel-plot use could itself be subject to publication bias. Which journals reject more trials with negative results: high-impact or low-impact ones? This study might find something about that too. Or the results might themselves be thrown off by publication bias.

- Maybe also do the same type of study, but instead of doing 100 funnel plots, each representing 1 meta-analysis, do just 1 funnel plot containing 100 meta-analyses, each on the same age-old question.

- Or maybe do it, but only examining the efficacy of placebos. A kind of negative control.

- Also, journal impact factor may not be the best measure. One might want to additionally look at a standardized measure of the impact of the specific papers in question. The metric would have to adjust for the year of publication being more or less "poppin'". One might find outliers, like the most inflammatory or the most inaccurate studies getting shared the most. There might be a Goldilocks zone. If you found that zone, you could do the original study on journal impact factor all over again, but restricting the analysis to studies in that Goldilocks zone. Or do the same thing in the reverse order.

- Or maybe we'll learn more about how we suck at interpreting funnel plots:

https://pubmed.ncbi.nlm.nih.gov/10812319/

https://pubmed.ncbi.nlm.nih.gov/16085192/
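As an aside on the theme of those two papers (funnel-plot asymmetry is easy to misread), here is a minimal, self-contained sketch of the mechanism: simulated studies with a true effect of zero, with and without a crude publication filter, summarized by Egger's regression intercept as a rough asymmetry measure. The thresholds and numbers are illustrative assumptions of mine, not anything from the post or the linked papers.

```python
import random

def simulate_studies(n, true_effect=0.0, publication_bias=False, seed=0):
    """Draw n (effect estimate, standard error) pairs.

    With publication_bias=True, imprecise studies are only 'published'
    when they clear a significance threshold, hollowing out one side
    of the funnel. Thresholds here are arbitrary, for illustration only.
    """
    rng = random.Random(seed)
    studies = []
    while len(studies) < n:
        se = rng.uniform(0.05, 0.5)       # small study -> large standard error
        est = rng.gauss(true_effect, se)  # observed effect estimate
        if publication_bias and se > 0.2 and est / se < 1.96:
            continue                      # imprecise and non-significant: never published
        studies.append((est, se))
    return studies

def egger_intercept(studies):
    """Intercept from regressing z-score (est/se) on precision (1/se).

    Roughly zero for a symmetric funnel; pushed away from zero when the
    small 'negative' studies are missing (the idea behind Egger's test).
    """
    xs = [1.0 / se for _, se in studies]
    ys = [est / se for est, se in studies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - slope * mx

fair = egger_intercept(simulate_studies(500))
skewed = egger_intercept(simulate_studies(500, publication_bias=True))
print(f"no bias:   intercept = {fair:.2f}")
print(f"with bias: intercept = {skewed:.2f}")
```

Without the filter the intercept sits near zero; with the filter it shifts clearly upward, which is exactly the asymmetry a funnel plot is supposed to make visible.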

- We need the low-impact journals because of gatekeeping. How many doctors know that some RCT says black seed oil cures 40% of kidney stones? Or that a whole-foods plant-based diet can remit diabetic neuropathy? Or that an RCT says melatonin cut COVID mortality by 90% in some hospital? Etc., etc.

- Lastly, if you find that low-impact journals have higher quality, make sure to submit your results to a high-impact journal so you can get rejected and publish in a low-impact one. "Study in low-impact journal claims low-impact journals are better." Or conversely, "Study in high-impact journal says low-impact journals are better. Journal's impact factor soars."

It's 4am...hope this is still coherent in the morning.

Michael DAmbrosio:

If I recall correctly, a similar proposal was made in Stuart Ritchie's "Science Fictions", which offered ideas to "right the ship" after the replication crisis.

On the argument you propose:

"I’d hypothesize that if we attached some measure of journal quality (probably the impact factor) to each point (study) on the original funnel plot we would find that the higher quality journals routinely publish studies that fill the pipette of truth while lower quality journals routinely publish articles whose results fill the colander of distraction"

Can we trust "higher quality journals" when even the NEJM publishes obvious nonsense like "Lifting Universal Masking in Schools — Covid-19 Incidence among Students and Staff"?

https://www.nejm.org/doi/full/10.1056/NEJMoa2211029

Quick, obvious issues with the Boston study:

1) Figure 1 shows that the students in the schools which would eventually lift their mask mandates had much higher case counts before the mandates were lifted, indicating that whatever caused these districts to have higher case counts was happening before masks were removed. Yet the authors cut this off by starting the graph in Figure 1 in February, though it is clear cases were much higher in January. A classic data-dredging technique.

2) The authors apparently didn't realize that 13 of the schools they counted as "keep mandate" had successfully received an exemption earlier (there was a provision that schools meeting a certain vaccination percentage could be exempt from the mask mandate).

You can cross-reference this list with Table S1 to see the 13 schools they missed:

https://www.cbsnews.com/boston/news/massachusetts-schools-mask-mandate-lifted-list-dese/

Example of one of the schools lifting it: https://www.kingphilip.org/important-mask-update-2/

3) One of the authors, when questioned about the lack of accounting for testing differences (many schools had students taking twice-weekly antigen tests, while other schools followed the CDC guidance that you only need to test after exposure when not wearing masks), argued that you should just trust her because she has a PhD.

(edit: forgot to add source: https://twitter.com/EpiEllie/status/1557497452781096960?s=20&t=20X-EaQtKJAw3a0mwTzSTg)

4) The authors organized a successful Change.org campaign to get masks back on kids in Boston earlier that year, yet made no mention of this conflict of interest in their disclosures.

https://twitter.com/EpiEllie/status/1429102872470433795

5) One of the authors penned the Boston Globe op-ed "It's too soon to lift the school mask mandate" and also didn't disclose this conflict of interest.

https://www.bostonglobe.com/2022/02/11/opinion/its-too-soon-lift-school-mask-mandate/

6) Just as an aside, almost all of the authors are on record as supportive of masking children prior to the study. Is it any surprise that they were able to find high efficacy using one of the lowest tiers of evidence?

____________________

If the NEJM can publish nonsense like this, are there perhaps bigger problems to address before fixing meta research bias?
