Discussion about this post

The Great Santini

There are really four populations: (1) those invited who screened; (2) those invited who did not screen; (3) those not invited who screened anyway; (4) those not invited who did not screen. Population (3) is really the fly in the ointment. Then there are the sub-populations of (1) and (3) in whom screening detected illness (1/3 A) and those in whom it did not (1/3 B). It would be interesting to know the false positives and false negatives in these two groups and the associated outcomes, including adverse side effects.

Why? You might discover, for example, that sending an invitation has no impact on screening rates, or maybe it does. You might also find that screening is ineffective at accurately diagnosing disease and that the side effects resulting from false positives overwhelm the advantage of finding the disease early, or the opposite. If you were actually trying to figure out what policy to adopt, would this information be valuable? I think it would. Why invite people to an ineffective screening? Why invite people to a screening that is more likely to cause harm than to advance health?

Of course, once you start the study you probably have little ability to modify it on the fly without injecting bias. So it is best practice to really think things through before you start and to challenge your assumptions. Maybe screening is ineffective. Maybe screening is effective. I suspect that the study designers started from the assumption that screening is effective. But that assumption may not be true. Whatever the truth was, it would impact the results of this study.

Richard Feinman

Actually, this is a simple topic. It becomes complicated if you don't understand randomization.

I made it complicated (including an error) by not adequately proofreading and, originally, by making the discussion too general. This is the revised version.

The problem is easiest to see with an example.

You are studying the effect of an antidepressant pill on a cognitive test like matching words. There are two groups: one that takes the pill and a control that takes a look-alike placebo. Both groups are randomized with respect to the relevant variables: age, health status, etc.

100 people are assigned to the pill group. The outcome is that 30 people show an increase in performance on the test. You would report that people in the pill group have the measured increase. That's what we always did. The measured effect of being assigned to take the pill is an increase in 30% of subjects.

However, you suspect that not everybody took the pill, for whatever reason. The experiment is repeated with TV cameras and blood tests for the presence of the pill. It turns out that, for some reason, only 60 people actually took the pill. They included the 30 who had increased performance; that is, 50% of the people who took the pill showed an effect.

This is called per-protocol; that is, the subjects did what you told them to do. You report that the drug is 50% effective. In addition, you report that there was only 60% adherence. Again, this is what we always did.
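The arithmetic behind the two ways of reporting can be sketched in a few lines of Python, using the counts from the example above:

```python
# Worked numbers from the example: 100 subjects assigned to the
# pill group, 60 actually took the pill, and 30 showed improvement
# (all 30 responders were among the 60 who took it).
assigned = 100
adherent = 60
responders = 30

# Intention-to-treat: effect of being ASSIGNED to the pill;
# the denominator is everyone assigned, regardless of adherence.
itt_effect = responders / assigned      # 0.30

# Per-protocol: effect of actually TAKING the pill;
# the denominator is only those who followed the protocol.
pp_effect = responders / adherent       # 0.50

adherence = adherent / assigned         # 0.60

print(f"ITT: {itt_effect:.0%}, per-protocol: {pp_effect:.0%}, "
      f"adherence: {adherence:.0%}")
# ITT: 30%, per-protocol: 50%, adherence: 60%
```

The only difference between the two numbers is the denominator, which is exactly the point of the example: the same 30 responders are reported against 100 assigned subjects (ITT) or against the 60 who actually took the pill (per-protocol).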

Now, what's odd is the appearance of the idea of intention-to-treat, which says that, even if you know that only 60 people took the drug, you must report the data as if all 100 subjects took the drug. It doesn't make any sense. It is, in fact, false and misleading.

Where did it come from? The answer is that frequently all we know is how many people were told to take the drug and what the outcome was. We suspect people may have lost the pill or spit it out or whatever, but we don't know this, so we have no choice but to report the outcome as a fraction of all the people in the group. As in the example above, it's what we always did because it's all we could do, and we didn't give it a name.

Intention-to-treat answers the question: what is the effect of being assigned to an intervention?

Per-protocol answers the question: what is the effect of actually following the intervention?

Usually, we are not interested in the actual assignment to the study group. The important part is: if we follow instructions, how will it turn out? Again, as experimenters, we frequently don't have good access to adherence to the protocol. In that case, we have no choice but to report the data we have.

The confusion is in understanding randomization.

Randomization is over before the experiment begins. If you are randomized, the outcome cannot affect the randomization. If you find out that subjects were not properly randomized -- e.g., they were smokers but did not report that during randomization -- you have to do the experiment over.

Once in the study, you cannot be un-randomized by the outcome any more than you can become un-baptized. You can become a bad person, or you can be excommunicated, or whatever, but you were baptized.
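The point that randomization is fixed before any outcomes exist can be made concrete with a minimal sketch (the subject IDs and the fixed seed are hypothetical, just for illustration):

```python
import random

# Randomization happens ONCE, before the experiment begins.
# Nothing observed later can "un-randomize" a subject.
subjects = [f"subject_{i}" for i in range(10)]

rng = random.Random(42)   # seeded so the assignment is reproducible
shuffled = subjects[:]
rng.shuffle(shuffled)

# Half go to the pill group, half to the placebo group.
half = len(shuffled) // 2
assignment = {s: "pill" for s in shuffled[:half]}
assignment.update({s: "placebo" for s in shuffled[half:]})

# Outcomes are collected and analyzed later; the assignment dict is
# never modified. Dropping non-adherent subjects out of the "pill"
# group would amount to rewriting the randomization after the fact.
print(sum(1 for g in assignment.values() if g == "pill"))  # 5
```

The design choice the analogy insists on is that `assignment` is created before data collection and then treated as read-only; intention-to-treat analysis uses exactly this original assignment, adherent or not.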

Intention-to-treat asks: what is the effect of HAVING BEEN TOLD to follow instructions?

Per-protocol asks: what is the effect of following instructions, that is, what is the effect of the intervention? We discussed this in a blog post and a publication (link in the post).

https://feinmantheother.com/2011/08/21/intention-to-treat-what-it-is-and-why-you-should-care/

