Fraud, Distortion, and Truth in Science
This week Freakonomics released an interesting podcast episode about science. The theme of the episode is: why is there so much fraud in our field? The episode details examples of academic papers with fraud — the data were fabricated, and there were many telltale signatures along the way. It explores how and why this happens, centering on the recent scandal involving Francesca Gino and Dan Ariely.
At one point, a guest offers that he suspects 5% of science contains fraudulent data. That is a wild estimate. I admit it fits my intuition, but it is safe to say there is, at this point, no good evidence to support the claim. We don't know how often fraud — that is, making up data — occurs.
What is left out of the discussion, however, is that fraud is not the greatest threat to science. Fraud is an extreme case: the conclusions are neither true nor useful because the data are cooked. The far more dangerous problem in science is that nearly all the conclusions we report are neither true nor useful — not because of fraud, but because we lack commitment to doing high quality, adequately powered, pre-specified work.
The sad truth is nearly none of science is true and useful.
How does this happen? Well, first consider that most biomedical science is about two things: describing the world and improving it. Let me tackle them in the opposite order.
In order to improve the world, you need to (a) have an intervention or plan and (b) have some confidence that implementing it makes things better. The easy part is having a plan. Lots of people — from geniuses to lunatics — have a plan and reasons to believe it might work. The hard part is causality. The world is riddled with links, mostly spurious. How do you figure out which ones actually do what you think they do?
We live in a world of activists. Young people who want to do good. Sadly, they are typically so poorly trained in critical thinking and causality that ~100% of what they suggest just doesn’t do anything of value. Much of it is harmful. There is nothing more dangerous than a person who wants to do good, but doesn’t know what works.
Science, then, barely helps us. Many studies lack controls. Many are hopelessly flawed by confounding or bias. Most academics draw causal conclusions that aren't true. Even when the conclusions are true, they can't be scaled; they don't work when you apply them writ large. This has been called the efficacy-effectiveness gap.
Now consider the other task: describing the world. Here too we fail. Typically, we claim that something predicts or is associated with a bad outcome, but we haven't adequately dealt with all the other things already known to predict that outcome. We claim to have found something new, but we aren't adding anything.
The entire field of science is full of careerists who churn out low-credibility studies. It is astonishing. Only in extreme situations, with intense pressure to deliver and poor quality control, do people overtly manufacture data, but even in the banal instances, the conclusions are neither true nor useful.
Science, as a sociological construct, often feels less like a pursuit of truth and more like a specific type of welfare program for the children of rich parents who were clever in school.
Final thought: Adam Cifu's recent post on Peter Marks and Bob Califf was great. To me the most interesting point is that these two are perfectly positioned to make Pfizer run the randomized trial that would answer the question, yet they refuse. This is emblematic of the bigger failure in science.
So, as much as I appreciate Freakonomics, I wish we talked more about the deeper problem in science.
Sensible Medicine is a reader-supported publication. To receive new posts and support our work, consider becoming a free or paid subscriber.