The results could force journals to cease publishing. The physician-researchers who primarily publish in these journals could spend more time improving medical care by seeing patients.
One more consideration came to mind. I would not, for example, believe that this study design works for antidepressants. All the antidepressant studies say there's a small and arguably clinically insignificant effect. The problem is it's all due to bias, and antidepressants don't actually work. There's unblinding bias, publication bias, sponsorship bias, something called telephone-game bias, cold-turkey bias, and short-trial-duration bias (trials stop before the median time to natural remission of untreated depression). Then there's the fact that the non-responder subgroup actually sees their depression scores do worse than placebo. It goes on. It's basically an anti-science field, and yet the studies have a consensus. I would actually believe the outlier studies, not the ones that align. So the presumption behind this kind of study design, that we have a tendency toward quality to begin with, is one I'm afraid may often be untrue.
Scanned over the BGC contract with the DoD. Thanks to Sasha, we're all getting a much clearer picture of where this is all going to lead: back to the psychopaths at the DoD.
You should read Dr. Pierre Kory’s substack called “Medical Musings” regarding the collapse of reliable scientific studies‼️
Very cool notion, but equivalent to assigning some weighting factor to results. And given how many publications seem already captured by pHarma, what does that mean?
I see that the NIH requiring those it funds to do clinical studies to report their results might very well force into the open the studies that showed no effect from the thing being studied, and might go a long way. All results should see the light of day, not live only in off-the-record conversations among researchers.
I am reminded of the studies that went out of their way to ignore recommended doses and protocols to defame IVM and HCQ. And that doesn't even count the studies that were purposely fraudulent. Such fraudulent studies, when discovered, ought to get a full investigation revealing the why.
#1. This spirit will move us more in a direction of curious exploration of better practice - Yay!
#2. where's iii ?
#3. To get at the more "forest for the trees" questions ("how could the current mess be functional?"), I am pondering not the content of "The Science", or even the process of procuring and producing "The Science", but our relationship with "science"... any thoughts?
Adam I think your article was great and David AuBuchon’s comment had me burst out laughing, especially the last paragraph.
Regarding your modified funnel plot picture I can only wonder if in prior drafts of this post you may have used pictures of common hospital devices that would symbolize flushing out the truth and collecting the “distractions”
This approach would work equally well applied to university departments, and to specific labs and primary researchers. We know that much of the research associated with economically profitable and politically sensitive topics is false. Why not simply and easily eliminate those actors? Is it simply because they are the large, favored, major research universities in the country?
Universities are Hollywood, generating new actors who are indoctrinated in script.
Fantastic!! This just made my day🤣
Super article, thank you. Where would the "Wisk of Wisdom" fit in?
I always appreciate your views. One other type of analysis I’d LOVE to see is results from studies that were funded by pharma vs NIH vs independent - forgive my ignorance on who else funds studies, but is there anything showing “financial compensation bias” that isn’t just, oh I don’t know, cooler talk?
Independents include the Howard Hughes Medical Institute in the US and the Wellcome Trust in the UK.
Doesn't the Wellcome Trust get funding from the Gates bunch?
It is possible. But they have a very large endowment. Their web site states: "Wellcome generally does not receive donations or government grants. We don’t raise money from the public. " https://wellcome.org/who-we-are/investments
I love it! and the comments.
First and foremost, it sounds like the comparison of the NY Times to the National Enquirer. I would choose the NYT because of the writers, the reputation, and information that fits my needs. I wouldn't pick up the NE because of its reputation, writers, and information. So the reputable doctors researching and testing resourceful and useful information that affects most physicians would publish in the "NYT" of journals, which is the go-to literature for me. I'm sure that all of your physicians already know what is truth and what is bunk.
Who is your audience? Medical students in their 3rd year, or the country physician serving 5 counties because he is the only practitioner? A new resident, or the PCP who has delivered 3 generations of babies? My presentation and my subject of study would vary widely between those sets of people. A bar or line chart, maybe including some bubble data, would be good for the country doc and the PCP, whereas I'd use an advanced Excel chart and graph in a PowerPoint presentation for the others. This is conjecture, and I am not pigeonholing anyone or being prejudicial, because I put myself in the first category of a simple bar graph and a good presenter. Because I'm older, I assimilate and understand information best when it's presented in a familiar manner. I prefer writing as opposed to point-and-click. I prefer counting IV drops per minute rather than running it through a machine that counts for me. Etc.
The study producing data is what is important. The presentation should not be one size fits all.
1 - Find the most crucial need for a study
2 - Use the most reputable, truth-seeking physicians to perform the trials
3 - Publish the outcomes in a format and journal pertinent to those the study affects.
Above and beyond that, anyone outside of your assumed interested group may pick the study up and use it, and may or may not get it, but you don't have to reinvent the wheel.
Of course, not being a physician, I could have read your entire paper incorrectly, leaving you all with just a nondescript few paragraphs of nothingness.
IMHO, the NY Times is no different from the supposedly respected medical journals, in that they also choose to print pieces whose conclusions support the mandated protocol, slanted so far they bend over backwards. Not only do they appear to discriminate, but they retract already accepted and published studies on false charges or at the clamor of the mob. Credibility is determined by the quality of the information, not the ‘respected names of authors’, who in many cases have shown themselves to be compromised (high-profile studies funded by grants that come with unspoken strings attached). Often the conflict-of-interest statements show no conflict or are not detailed enough (lying by omission). Will stop now, could go on, sorry.
Understood and agree. I could not come up with a simile for truthful and respectful content vs contrived, slanted, biased content. My initial thought was Playboy vs Penthouse.
Perhaps recalling the NE and a politician's baby (John Edwards) might be of interest. But since its sale, who knows. OTOH, while Mr. Edwards might actually have been a good President, he was forced out. Still, his legacy lives on: https://www.politico.com/newsletters/west-wing-playbook/2022/05/09/the-shadow-of-john-edwards-00031097.
Trying to get there with that analogy. Maybe those publications could be contrasted as stylized vs sensational. Each has a market, but neither of them are professing to be a source of scientific research (I think). Interesting, though, but can’t quite get it to fit. Enjoying the thought process, thank you.
Actually, the National Enquirer articles about abduction by space aliens have become more believable than much of the tripe now published by the NYT.
Touché. :-)
If I recall correctly, a similar proposal was made in Stuart Ritchie's "Science Fictions", which offered ideas to "right the ship" of the replication crisis.
On this argument you propose:
"I’d hypothesize that if we attached some measure of journal quality (probably the impact factor) to each point (study) on the original funnel plot we would find that the higher quality journals routinely publish studies that fill the pipette of truth while lower quality journals routinely publish articles whose results fill the colander of distraction"
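For concreteness, the quoted hypothesis could be prototyped on simulated data before anyone trawls real meta-analyses: generate studies around a known true effect, let journals with stronger publication bias drop more null results, and check whether the small-study (wide standard error) end of the funnel drifts away from the truth. Everything below (the true effect size, the sample sizes, the bias rates) is invented purely for illustration; this is a rough sketch, not the proposed study.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2  # assumed true effect size for this toy simulation

def simulate_published(n_studies, reject_null_prob):
    """Simulate trials of varying size; a journal is assumed to publish all
    significant results but reject a non-significant one with the given
    probability. Returns the (effect, standard error) pairs that survive."""
    published = []
    for _ in range(n_studies):
        n = random.choice([20, 50, 100, 400, 1600])   # sample size
        se = 1 / n ** 0.5                             # standard error shrinks with n
        effect = random.gauss(TRUE_EFFECT, se)        # observed effect estimate
        significant = abs(effect / se) > 1.96
        if significant or random.random() > reject_null_prob:
            published.append((effect, se))
    return published

def small_study_inflation(published):
    """Mean observed effect among the noisiest (small) studies; values well
    above TRUE_EFFECT correspond to funnel-plot asymmetry."""
    small_effects = [e for e, se in published if se > 0.1]
    return statistics.mean(small_effects)

# Hypothetical scenario: one journal tier rejects null results far more often.
low_bias = simulate_published(2000, reject_null_prob=0.2)
high_bias = simulate_published(2000, reject_null_prob=0.9)
print(small_study_inflation(low_bias), small_study_inflation(high_bias))
```

On this toy setup, the heavily-filtered literature's small studies sit visibly above the true effect, which is exactly the asymmetry the proposed impact-factor-annotated funnel plot would be looking for in real journals.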
Can we trust "higher quality journals" when even the NEJM publishes obvious nonsense like "Lifting Universal Masking in Schools — Covid-19 Incidence among Students and Staff"?
https://www.nejm.org/doi/full/10.1056/NEJMoa2211029
Quick, obvious issues with the Boston study:
1) Figure 1 shows that the students in the schools which would eventually lift their mask mandates had much higher case counts before the mandates were lifted, indicating that whatever caused these districts to have higher cases was happening before masks were removed. Yet the authors cut this off by starting the graph in Figure 1 in February, though it is clear cases were much higher in January. A classic technique of data dredging.
2) The authors apparently didn't realize that 13 of the schools they counted as "keep mandate" had successfully received an exemption earlier (there was a condition that if you met a certain vaccination percentage, you could be exempt from the mask mandate).
You can cross reference this list with table S1 to see the 13 schools they missed:
https://www.cbsnews.com/boston/news/massachusetts-schools-mask-mandate-lifted-list-dese/
Example of one of the schools lifting it: https://www.kingphilip.org/important-mask-update-2/
3) One of the authors, when questioned about the lack of accounting for testing differences (many schools had students taking antigen tests twice a week, while other schools followed the CDC guidance that you only need to test after an exposure when not wearing masks), argued that you should just trust her, because she has a PhD.
(edit: forgot to add source: https://twitter.com/EpiEllie/status/1557497452781096960?s=20&t=20X-EaQtKJAw3a0mwTzSTg)
4) The authors organized a successful Change.org campaign to get masks back on kids in Boston earlier that year, yet make no mention of this conflict of interest in their disclosures.
https://twitter.com/EpiEllie/status/1429102872470433795
5) One of the authors had penned the Boston Globe op-ed "It's too soon to lift the school mask mandate", and also didn't disclose this conflict of interest.
https://www.bostonglobe.com/2022/02/11/opinion/its-too-soon-lift-school-mask-mandate/
6) Just as an aside, almost all of the authors are on record as supportive of masking children prior to the study. Is it any surprise that they were able to find high efficacy using one of the lowest tiers of evidence?
____________________
If the NEJM can publish nonsense like this, are there perhaps bigger problems to address before fixing meta research bias?
Great detail. Spending a bit of time demolishing some MMWR reports also gives one pause. Starting with a conclusion seems not good.
When it comes to current Truth(TM), the once-venerable journals (NEJM, Science, JAMA, etc.) have become as trashy as the least-good. This analysis is well done and I can give you similar takedowns (that should have been obvious to ANY reviewer without a propaganda agenda) to many articles in "the best" journals.
It was not always this way. I have not always agreed with the NEJM's editorial stance on things, but for a long time they and their peers at least appeared to try to be rigorous. That went out the window with TDS and the current administration; now only things consonant with the narrative get published.
The funniest articles are actually those where the data (especially the "real" data in the tables) clearly shows one thing, but the conclusion/abstract is always (you can guarantee it without looking) "masks work" or "Vaccines are great". It has, sadly, become funny...and not funny good.
I agree. My PCP asked if I had gotten a bivalent booster; I explained I was burned out on them at the moment. She didn't press inappropriately; I just let her know I didn't want to be part of the "vaccine trial", and she gave me an understanding, knowing look and we moved along. I remember hearing about some pretty good evidence at one time for natural immunity, and I trust those data.
Beautifully written and I couldn’t agree more.
In my brain, the inspiration to turn the tables, break the pattern, and start reporting on studies that evaluate the media with numbers and no adjectives would be one of the most effective ways to raise this population's spirits. A dose of their own proverbial medicine.
Spite-laced last sentence aside, behavioral modification by means of changing patterned behavior has never had a better opportunity to correct a toxic cycle.
We need more scrappy young quarterbacks in the big game for integrity. If you're faced with a line of looming, defensive, cumbersome opponents, you have to scramble and change your pattern.
We had a similar idea to your IF ranking...
https://journals.lww.com/annalsofsurgery/Abstract/2016/12000/Underreporting_of_Secondary_Endpoints_in.18.aspx
We had a look at trials reporting wound infection as a secondary outcome, and essentially it's as accurate as guessing... unless it was the primary outcome of the trial.
So we need to either do better measurements of secondary outcomes or stop the effort of reporting them.
Love it. Thanks.
- Or we might find that impact factor means squat.
- Don't forget this novel funnel-plot use could itself suffer from publication bias. Which journals reject more trials with negative results: high-impact or low-impact ones? This study might also find something about that. Or the results might be thrown off by that same publication bias.
- Maybe also do this same type of study, but instead of doing 100 funnel plots, each representing 1 meta-analysis, do just 1 funnel plot combining 100 meta-analyses, each on the same age-old question.
- Or maybe do it, but only examining efficacy of placebos. A kind of negative control.
- Also, journal impact factor may not be the best measure. One may want, in addition, to look at a standardized measure of the impact of the specific papers in question. The metric would have to adjust for the year of publication being more or less "poppin'". One might find outliers, like the most inflammatory or the most inaccurate studies getting shared the most. There might be a goldilocks zone. If you found that zone, you could do the original study on journal impact factor all over again, but restricting the analysis to studies in that goldilocks zone. Or do the same thing in the reverse order.
- Or maybe we may learn more about how we suck at interpreting funnel plots:
https://pubmed.ncbi.nlm.nih.gov/10812319/
https://pubmed.ncbi.nlm.nih.gov/16085192/
- We need low-impact journals because of gatekeeping. How many doctors know some RCT says black seed oil cures 40% of kidney stones? Or that a whole-foods plant-based diet can remit diabetic neuropathy? Or that an RCT says melatonin cut covid mortality 90% in some hospital? Etc., etc.
- Lastly, if you find that low-impact journals have higher quality, make sure to submit your results to a high-impact journal so you can get rejected and publish low-impact. "Study in low-impact journal claims low-impact journals are better." Or conversely, "Study in high-impact journal says low-impact journals are better. Journal's impact factor soars."
It's 4am...hope this is still coherent in the morning.
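The "standardized measure of the impact of the specific papers, adjusted for publication year" idea above could be made concrete by z-scoring each paper's citation count within its own publication year, so papers from a "poppin'" year and a quiet year land on one common scale. The citation counts below are invented purely for illustration; this is a sketch of the normalization, not a validated metric.

```python
import statistics

# Hypothetical per-paper citation counts, grouped by publication year.
# Raw counts are not comparable across years (older papers accumulate more),
# so we standardize within each year: z = (citations - year mean) / year stdev.
citations_by_year = {
    2018: [120, 40, 35, 80, 300, 60],
    2021: [30, 10, 8, 25, 90, 15],
}

def year_adjusted_scores(by_year):
    """Return within-year z-scores so paper impact is comparable across years."""
    scores = {}
    for year, counts in by_year.items():
        mu = statistics.mean(counts)
        sd = statistics.stdev(counts)
        scores[year] = [(c - mu) / sd for c in counts]
    return scores

scores = year_adjusted_scores(citations_by_year)
# The outlier paper in each year now stands out on a common scale,
# regardless of how heavily cited that publication year was overall.
print({year: round(max(s), 2) for year, s in scores.items()})
```

With scores on one scale, spotting the outliers (or a goldilocks zone) and re-running the funnel-plot analysis restricted to that zone becomes a straightforward filtering step.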
Well done. Perfectly in spirit.