39 Comments
GBM

I disagree, John. If the editors recognized the controversial element of the discussion, they could and should have required a revision of the conclusions. Alternatively, they could have solicited a contrary editorial to bring out the deficiencies of the study. We in the medical profession need widespread education and exposure regarding faulty studies. Journal readers EXPECT careful and critical reviews. Subscribers need to complain to the editors and publisher that this flawed study was published without reviewers demanding a more thorough discussion of limitations and an opposing view in an editorial. Physicians need to take seriously their responsibility to be critical, especially in light of their continued general respect for the deeply flawed Tony Fauci.

Michael Campbell

Dear John

Shocking example!

If you are collecting articles on what I call 'quasi-experimental studies' (i.e. non-randomised prospective studies) by authors who can't be bothered to do a proper analysis, here is another example:

Pan H, Zhou X, Shen L, Li Y, Dong W, Wang S, et al. Efficacy of apatinib + radiotherapy vs radiotherapy alone in patients with advanced multiline therapy failure for non-small cell lung cancer with brain metastasis. Br J Radiol (2023). doi: 10.1259/bjr.20220550

Propensity score matching is not a panacea, but it can be useful. I wrote a discussion of propensity scores here:

Campbell MJ (2017). What is propensity score modelling? Emergency Medicine Journal. doi: 10.1136/emermed-2016-206542
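
As a rough illustration of what propensity score modelling involves, here is a minimal Python sketch; it is not taken from either paper above, and the covariates, treatment model, and greedy 1:1 matching rule are all invented for the example (it assumes numpy and scikit-learn are available).

```python
# Minimal sketch of propensity score matching (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Simulated baseline covariates: older/sicker patients are less likely
# to receive the treatment, which is what creates confounding.
age = rng.normal(65, 10, n)
ef = rng.normal(55, 8, n)                      # ejection fraction
p_treat = 1 / (1 + np.exp(-(-0.05 * (age - 65) + 0.08 * (ef - 55))))
treated = rng.random(n) < p_treat
X = np.column_stack([age, ef])

# Step 1: model each patient's probability of receiving treatment
# from baseline covariates (the propensity score).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 matching of each treated patient to the untreated
# patient with the closest propensity score, without replacement.
treated_idx = np.where(treated)[0]
controls = list(np.where(~treated)[0])
pairs = []
for i in treated_idx:
    if not controls:
        break
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    controls.remove(j)

# After matching, baseline covariates should be similar in the two groups.
t_rows = [i for i, _ in pairs]
c_rows = [j for _, j in pairs]
print("mean age, treated vs matched controls:",
      round(age[t_rows].mean(), 1), round(age[c_rows].mean(), 1))
```

The point of the sketch is that the match is made on the probability of receiving treatment given baseline characteristics, not on outcomes.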

Michael Buratovich, Ph.D

Can someone please explain “propensity matching”? Is it matching the probable outcomes between the two subject groups, or is it making sure that the two subject groups have the same baseline health concerns?

James R

Most MDs will only read the headline and not see that disclaimer or examine the methodology. So, it functions to avoid liability without reducing sales. I think that’s the answer.

Zade

Thanks for these articles. I read each one and learn more about critical evaluation of results published in refereed medical journals. My field is infrared remote sensing, and even there you encounter papers making great claims based on slipshod methods. I was naive about medical papers, slow to understand that you really can't assume the editors of medical journals will flag shoddy methods and weak analysis. I naively assumed that, because people's lives are often at stake, medical research rose above data laundering and cohort-tweaking. The mRNA and AAV jabs were the beginning of enlightenment. I now see that where humans are involved, nothing can be taken for granted.

Frank Harrell

Very well written, John. This is so embarrassing to statistical and epidemiologic practice. Promotion committees need to strike off any papers in that journal.

Winton Gibbons

They really should put the warnings at the beginning.

Jim Ryser

My opinion is the same old explanation that has proven correct over the years: money. Surgeons did the study. Surgeons make $ when doing surgery. Cleveland Clinic is in the business of making money. That cynicism has led me to know my personal case and apply critical thinking to every decision I make about my health. Many don't have that luxury, and I find that sad.

Linda McConnell

I dare to take this one step further: fame. (Unfortunately) how many readers will take this at face value? Maybe they won't see the flaws, or maybe they'll see the flaws but discount them as trivial. The team: Look at us. See what we have done. Our names are in the ____ Journal. Our hospital will receive accolades. Who doesn't like to see their name in lights, albeit for a minute?

Jim Ryser

I completely agree with you. Fame is addictive - a taste is never enough.

Steve Cheung

Brutal. Just brutal. We had a bit of a reprieve during ESC, when at least there were some good studies (positive and negative). But now it’s back to regular depressing programming. I guess I should be happy that flotsam like this at least doesn’t get into a big meeting.

The study is what it is. But why do journal editors allow spin language to the extent that they do? They could easily control this by saying: your data do not justify such a statement, disclaimer or no disclaimer; change it or it’s not seeing the light of day under our banner. But I guess they’re eyeing their bottom line as well: stick to principles too much and you have nothing to print.

Which goes to my bigger point: maybe there are too many journals? Does JACC really need 6 or 8 or however many sub-publications? There is the counterpoint that niche journals serving a niche field with a niche audience still serve a purpose. But maybe that purpose doesn’t deserve servicing monthly or biweekly? So have a venue for niche stuff. But you can still be selective and not publish crap, and maybe that will spur researchers not to write crap.

Zade

The journals know that device and pharma companies will buy reprints and distribute them to doctors if the papers make their products sound good. There can be pretty good money for the journal if they sell, say, 100k reprints. Plus, some reviewers are too busy to read the article critically, and then the editors will give the paper the go-ahead.

Benjamin Hourani

The study of the week should be relabeled as “the study of the weak”

Mary Pat Campbell

Who were the "peers" who reviewed this?

It's not just a matter of the researchers, though of course it reflects mostly on them; it's also the editors and reviewers who let this all pass.

TheyLiveAndWeLockdown

Yes. Not only should this "study" be retracted, but the authors and peer "reviewers" need to be banned from publishing further work.

Mary Pat Campbell

Thing is, sloppy work like this is par for the course.

It would be lovely to make an example pour encourager les autres, but the bloodbath would be horrendous should we start going through all the studies, eh?

(and yes, one should)

Thing is, I wonder how many people actually read these carefully and critically? They're published, but not really with the intent of being read.

Sobshrink

Not only is it in a top journal, but its authors are at a hospital consistently ranked in the top 5, and often the #1 hospital, for cardiology. It was where I planned to go if I ever needed cardiology care, but not now! So beyond your Substack and Twitter/X, have you contacted the authors to hear their rationale for not doing propensity matching and for writing such a misleading abstract, which is all many (most?) doctors have time to read? Have you considered writing a Letter to the Editor of the journal with your concerns? I have always known that surgeons tend to be biased toward doing surgery, and it's good to get a second opinion from a non-surgeon. I hope all the non-surgeon cardiologists read your Substack! :) BTW, I agree that writing an article about sham surgeries is a great idea, as well as one about the best alternative research designs when sham surgeries are not practical or ethical.

https://www.cleveland.com/news/2023/08/cleveland-clinic-again-wins-top-spot-for-heart-care-from-us-news-see-the-changes-in-2023-24-rankings.html

Tom Kaier

Thanks for picking this up. I was keen to hear your thoughts; when I followed your Twitter comment I didn’t have access to the full article, and the abstract alone doesn’t allow one to draw the same informed conclusions. It is remarkable how this got through, and the associated paywalls make it ever harder to unpick.

Patricia

It should induce cynicism, or at the very least deep skepticism. None of our institutions are trustworthy, academic or otherwise, and the sooner everybody accepts this and evaluates evidence and makes decisions accordingly, the better off we will be.

Brad Banko, MD, MS

Lead time bias
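
For readers who have not met the term, here is a toy simulation of the effect the comment names; the numbers are invented and only numpy is assumed. Detecting disease earlier lengthens measured survival from diagnosis even when no one actually lives longer.

```python
# Toy illustration of lead time bias (all numbers invented).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

death_age = rng.normal(78, 6, n)              # age at death, fixed in both scenarios
dx_late = death_age - rng.uniform(1, 3, n)    # diagnosed late (symptomatic)
dx_early = dx_late - 2.0                      # detected 2 years sooner

print(f"mean survival from diagnosis, late detection:  {(death_age - dx_late).mean():.1f} years")
print(f"mean survival from diagnosis, early detection: {(death_age - dx_early).mean():.1f} years")
# The extra 2 years of 'survival' is pure lead time: death occurs at the same age.
```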

HMMK

Even I, a lowly medical student, can see this from a mile away. We learn lead time bias in our first year and are tested on it in multiple national exams.

Surya

"The comparison, therefore, was between a healthier group vs a sicker group. That is surely why the survival curves separate in the first few months".

I have a statistical question. I totally agree with the choice of words and the assumptions the authors make. However, my question about this paragraph is: aren't you assuming (with so much surety), based on your own presumption, that the healthier patients did well? What if it's the play of chance and not so much about "logic"? If yes, what would be a better choice of words for the critical appraisal? If no, please educate me.
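
One way to see the force of the quoted criticism is that a baseline imbalance alone, with no treatment effect at all, is enough to separate survival curves within the first months. The sketch below is an illustration under invented hazards (assuming only numpy), not a re-analysis of the study.

```python
# Simulation: two groups with no treatment effect, only different baseline risk.
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Exponential survival times; the annual hazards are invented for illustration.
t_healthier = rng.exponential(scale=1 / 0.10, size=n)   # 10% per year
t_sicker = rng.exponential(scale=1 / 0.25, size=n)      # 25% per year

for months in (3, 6, 12, 24):
    yrs = months / 12
    print(f"{months:>2} mo survival: healthier {np.mean(t_healthier > yrs):.2f}, "
          f"sicker {np.mean(t_sicker > yrs):.2f}")
# The curves separate in the first few months even though neither group
# received any 'treatment'; selection, not therapy, produces the gap.
```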

Surya

I meant that I agree with you regarding the poor choice of words and the assumptions the authors made; I don't agree with them.
