To close our introduction to churnalism, we present an example of a common, yet different, form of science and medical journalism. Thus far we have dedicated ourselves to media stories that report on published scientific studies. This next example can be thought of as a product of the Tom Wolfe school of experiential journalism: I tried it, it works, and that should be enough proof. This is the type of news story that cites no scientific article. It is narrative nonfiction, but nonfiction only in the sense that it happened, not because it reports truth.
From the Houston Chronicle comes a story entitled, “Feeling the drip, drip, drip of the mobile IV craze: Proponents say it can fix everything from hangovers to the common flu.” This article describes a van that drives around and gives interested customers an intravenous infusion of vitamins. Early in the article, the journalist writes, “While the health benefits of these infusions is still being studied by health experts, and doctors generally recommend eating food for your nutrients…” but then immediately commits the “disclaim and pivot” sin, proceeding to a powerful, personal anecdote:
The contents of the IV bag take over my body quickly. My chest feels cold, like I've just swallowed a big glass of ice water. After a few minutes a great euphoria hits and my entire body feels at ease. I have a desire to hear early period Neil Young in an almost post-orgasmic haze. It's at this point I remark on how plush the seats in the van are.
"Yeah, you're in business now," Black laughs, standing over me wearing black rubber gloves. "Most people don't know what healthy feels like anymore."
Wait, there’s more:
"We have been able to take care of a lot of marathon runners before and after they ran. We've done CrossFit competitions, the Iron Man and lots of other races and events," Black says. "We also have clients who have cancer, Parkinson's, migraines and chronic dehydration issues."
What is the problem here? This story does nothing to separate the potential benefit of the contents of the IV bag, a medley of vitamins and minerals, from the benefit of a plain infusion of salt water, or from the experience of getting an IV infusion at all. How could this proposed therapeutic intervention be appropriately studied? The simplest way is a quick experiment.
Take 100 customers and divide them into 4 groups. A quarter of the volunteers get the IV bag of vitamins; a quarter get a similarly tinted (light yellow) bag of saline (salt water); a third group gets a nice big pitcher of lemonade to drink plus an IV (attached to that magical bag of vitamins) that is never started – a sham IV. The last quarter of the customers just get the lemonade. Everyone gets hydration, and the first three groups believe they are getting a “powerful” intravenous infusion. You then ask all the customers questions to gauge how they feel. We would be happy to wager on the results of this study. All the groups would feel better, having gotten some fluid. The first three groups would feel even better, having been “more aggressively” cared for. There would be no benefit seen in the vitamin group over the saline or sham IV groups, because any benefit that group gets beyond hydration is all from the placebo effect.
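To make that wager concrete, here is a minimal simulation sketch of the four-arm design. Every number in it – the hydration benefit, the placebo boost, the zero vitamin effect, the noise, the rating scale – is our own invented assumption for illustration, not data from any study. The point is simply that if the vitamins themselves add nothing, the vitamin arm ends up looking like the saline and sham arms.

```python
# Toy simulation of the proposed four-arm experiment.
# All effect sizes below are invented assumptions, not study data.
import numpy as np

rng = np.random.default_rng(0)
n_per_arm = 25  # 100 customers split evenly across 4 arms

HYDRATION_BENEFIT = 1.0  # assumed improvement from fluid alone
PLACEBO_BOOST = 1.5      # assumed extra improvement from believing an IV is running
VITAMIN_BENEFIT = 0.0    # the wager: the vitamins themselves add nothing

arms = {
    "IV vitamins":        HYDRATION_BENEFIT + PLACEBO_BOOST + VITAMIN_BENEFIT,
    "IV saline":          HYDRATION_BENEFIT + PLACEBO_BOOST,
    "lemonade + sham IV": HYDRATION_BENEFIT + PLACEBO_BOOST,
    "lemonade only":      HYDRATION_BENEFIT,
}

for name, true_effect in arms.items():
    # self-reported change on an arbitrary "how do you feel" scale, with noise
    scores = true_effect + rng.normal(0, 1.0, n_per_arm)
    print(f"{name:20s} mean improvement: {scores.mean():.2f}")
```

Under these made-up numbers, the three IV-believing arms come out roughly the same and the lemonade-only arm trails slightly – which is exactly the pattern we are betting on.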
You have certainly heard of the placebo effect. A placebo is any intervention that is not known, or intended, to have physiologic benefits. In studies of medicines, a placebo is often a pharmacologically inert pill. When patients given a placebo improve beyond what is expected, we call that the placebo effect. The placebo effect is real and has been supported in many reliable studies; it is due to actual physiologic changes in the body. There is nothing wrong with using the placebo effect to make people feel better. Every good doctor uses it to benefit his or her patients. You have probably gone to the doctor at some point with a minor issue. Your doctor spent time listening to you thoughtfully and patiently examining you while wearing a white coat. If you left the office feeling better, you have experienced the placebo effect.
When Adam gets a headache he usually takes an acetaminophen tablet and goes for a run. He has no idea if either of these interventions, or their combination, works better than just lying on the couch for an hour. They might work pharmacologically, or they might work because he thinks they will. What is important is that the acetaminophen/run intervention is inexpensive, harmless, and does not delay him from getting necessary medical attention. You can certainly see why using placebos that are the opposite – expensive and risky, whether the risk is intrinsic or comes through an opportunity cost – is considered unethical. Ironically, placebos that are more expensive or more invasive often produce larger placebo effects than ones that are cheaper and feel less substantial.
Occasionally in medicine we run studies that tease apart the placebo effect from the benefit of the treatment itself. This happens when we give half the patients an inert pill (often called a sugar pill) that looks indistinguishable from the treatment or, for procedures, give half the patients a sham intervention that looks and feels like the real thing but omits the vital step. If you look across many of these studies, you often find that the procedure or drug that you think helps people feel better is no better than the sugar pill or sham intervention. But, more pertinent to the placebo effect, both the treatment and sugar pill groups often do better than they did at the start of the study. This speaks to the remarkable human desire to feel better.
The lack of science in this article relegates it to the dung heap of churnalism. There is no hypothesis being tested. It is a journalist reporting an experience. The article is even worse than the one about the woman feeling better after the cold swim because this writer so clearly wants to believe. He writes,
“As I step out of Black's van after the treatment, I notice the trees are greener, the early evening air is cooler and my feet almost bounce in running shoes. I feel like that Nicolas Cage GIF from "Con Air" when he steps off the prison bus, with the wind blowing hair back and a close-eyed smile appreciating the moment.”
He does not ask the hard questions: Is there something to this, or was it just my expectation? How would I have felt if I just got a saline infusion, or a nice cold glass of lemonade from some kids selling it for 50 cents on my corner? What did I just pay for?
Conclusion:
In these final posts introducing the concept of churnalism, we have looked at whether saunas prevent Alzheimer’s disease, whether a positive attitude makes you live longer, and whether IV vitamin infusions give you energy. What ties these together is that they are all behaviors, some voluntary, some not, that are being examined for health effects.
For most of the things we do in life, we do not stop to ask about health effects. During the writing of this chapter we drank a cup of coffee, checked our email, tweeted, put on a sweatshirt, went for a swim, styled our hair (at least, one of us did), bought a concert ticket for the weekend, ate a couple of almonds, made our beds, and thought about using up the kale in the fridge for dinner. While reading this chapter you probably did some of those things, and many others, too. Maybe you even took a nap or, in a fit of frustration, slammed your laptop shut. While some of these activities, like kale consumption and swimming, are commonly tested for health benefits, others, like hair styling and concert ticket purchasing, rarely make the health news. We think saunas might have some health effects, but do not think concert-going does. The challenge with these kinds of health news articles that look at behaviors is to consider why we are thinking about this practice and not another. Usually, the activities covered have two things in common: they are modifiable and there is a folk belief that there is a health benefit.
The news articles discussed in our introductory posts have demonstrated all of the seven deadly sins of churnalism. The authors frequently forget that observational studies do not prove causation and that correlation is not the same as causation. They generalize and extrapolate results to make us believe that findings observed in the lab or in 4 healthy women are important to all of us. They neglect to acknowledge real problems, like confounding, in the underlying research. They promote wholly implausible findings and, rather than demonstrate curiosity by exploring what underlies the results, they disclaim and pivot, simply reporting theories put forth by those generating the data.
We ended our discussions of exercise, activities, and outlook not with examples of churnalism but with research that demonstrates why so many of the news articles are flawed. This research proves that our sixth sin, “keep testing; report just once,” is a grievous problem in the medical literature. Researchers look at relationships concerning what we eat, how we exercise, or how we behave, but they choose to test only some relationships. For the relationships they do explore, they use a variety of methods that yield different results. Only some of these results get written up; only some get published; only some make it into print, radio, or podcasts.
Let’s end this post with one final research study that demonstrates that what we read is just one possible interpretation of data. In this investigation, Silberzahn and colleagues performed a clever experiment. The question they asked was: do darker skinned soccer players receive more red cards from referees than lighter skinned players? You are probably already imagining the churnalism headline: “Racist Refs Give Black Players More Red Cards.” But the researchers were not out to prove this; they were actually interested in demonstrating how fraught interpreting this kind of data is.
The researchers assigned a data set to 29 teams. The data set contained the players’ skin tones (on a scale of 1-5), the red cards they received, and variables like position, weight, height, and the referees’ country of origin. The data also included the number of games in which the referee and player encountered each other and the players’ ages, clubs, and leagues.
What the researchers found was that the analytic approaches varied widely across the teams, and as the approaches varied, so too did the results. One team found that having dark skin made a player 10% less likely to get a red card. Another found that it made a player nearly three times as likely to get one. The rest fell somewhere in between. About 70% of the research groups found that having darker skin raised your risk of getting a red card, while about 30% found no association. The 29 teams used 21 unique analytic techniques. The investigators noted that, “Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability.”
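To see how easily defensible analytic choices can push an estimate around, here is a small simulation sketch in the spirit of the Silberzahn exercise. The data, the variables, and the confounding structure are entirely invented for illustration; nothing here comes from the actual study. Each model below is a reasonable specification, yet the estimated odds ratio for skin tone shifts depending on what the analyst adjusts for.

```python
# Toy illustration of "many analysts, one data set": the same simulated
# red-card data analysed with different (all defensible) model
# specifications gives different effect estimates. All numbers invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000  # simulated player-referee dyads

skin_tone = rng.integers(1, 6, n)  # 1 = lightest, 5 = darkest (invented coding)
# Invented confounding: darker-rated players are more often defenders here.
is_defender = rng.random(n) < np.where(skin_tone >= 4, 0.5, 0.3)
position = np.where(is_defender, "defender", "midfielder")
games = rng.integers(1, 30, n)  # games in which this referee saw this player

# Assumed truth for the simulation: position and exposure drive red cards;
# skin tone has no direct effect at all.
logit_p = -3.0 + 0.6 * is_defender + 0.03 * games
red_card = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

df = pd.DataFrame({"skin_tone": skin_tone, "position": position,
                   "games": games, "red_card": red_card})

specs = [
    "red_card ~ skin_tone",                     # crude association
    "red_card ~ skin_tone + games",             # adjust for exposure only
    "red_card ~ skin_tone + position",          # adjust for position only
    "red_card ~ skin_tone + position + games",  # adjust for both
]

for spec in specs:
    fit = smf.logit(spec, data=df).fit(disp=0)
    odds_ratio = np.exp(fit.params["skin_tone"])
    print(f"{spec:42s} OR per skin-tone step: {odds_ratio:.2f}")
```

In this made-up data the crude model suggests darker skin raises red-card risk, while the models that adjust for position show little or no effect. Twenty-nine teams making dozens of choices like these, on far messier real data, can plausibly land anywhere in the range the study reported.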
Like the finding that nearly every ingredient may or may not cause cancer or that there is a “vibration of effects” when we consider cause and effect relationships, these findings suggest that variation in the results of analyses of complex data is probably impossible to avoid. Going beyond the reality that these sorts of observational analyses are unable to show causation, analyses like this one make it clear that even proof of association is suspect.
Darker skinned players may be three times as likely to receive penalty cards or they may be equally likely. Only one team found lighter skinned players more likely to receive red cards. The rarity of this finding might be because it was incorrect, or it might just reflect the fact that such a result was not plausible to researchers. We live in a world where discrimination is against darker skinned, not lighter skinned, people. Imagine if researchers repeated the experiment but did not tell the teams that the 1-5 scale went from lightest to darkest, or if they purposely misinformed the analysts about the color scale. We suspect that in those cases some analysts would find that more penalty cards were given to lighter skinned players.
Expectations about what counts as a reasonable question and what counts as a plausible answer are inherently tied to health news. They are the reason why some daily activities enter researchers' agendas and others do not, and why only some topics are covered in the news. This bias, if that is the correct term, is a natural, human part of science, but one that is essential to understand as one thinks about health and science research and seeks out the highest quality information. We fit data into narratives that make sense to us. Often in life, this can be a great strength. At other times, it can mislead us terribly.