I contrast two population screening trials, DANCAVAS and NordICC. One stayed true to the principle of randomization; the other broke traditional intention-to-treat principles.
The correct response to that NordICC author is: then why didn't you randomize later in the process? It's entirely possible to create a realistic situation where people are invited to a regular physical, informed about potential screening for CRC during the physical, and then randomized if they consent. Some people will still drop out before the colonoscopy, but few enough that they won't obscure a clear positive result.
Of course you wouldn't be able to claim that everyone should have a colonoscopy, but you would at least have a statistically valid trial on the population that actually matters in a practical sense.
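To see why consent-then-randomize matters for the headline number, here is a minimal sketch of how non-adherence dilutes an intention-to-treat estimate. All of the risks and the effect size are assumptions for illustration only; the 42% participation figure is roughly what NordICC reported.

```python
# Illustrative sketch of how non-adherence dilutes an intention-to-treat (ITT)
# estimate. All numbers are assumptions for illustration, not NordICC data.

baseline_risk = 0.010   # assumed 10-year risk of CRC death without screening
true_rrr = 0.50         # assumed relative risk reduction in those actually screened
adherence = 0.42        # fraction of invitees who accept (NordICC reported ~42%)

# The invited arm mixes screened and unscreened invitees; here we assume
# non-adherers keep the baseline risk (no screening outside the trial).
risk_invited = (adherence * baseline_risk * (1 - true_rrr)
                + (1 - adherence) * baseline_risk)

itt_rrr = 1 - risk_invited / baseline_risk
print(f"ITT relative risk reduction: {itt_rrr:.1%}")  # 21.0%, not 50%
```

Randomizing only after consent pushes adherence toward 1, so the headline ITT number stays close to the effect of the procedure itself.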
I think the question NordICC was asking was whether broad-based population screening makes sense. The answer is NO, it does not make sense. It wasn't a failed design. Vinay went over this a lot, but with masking as the topic.
Not at all. Dr. Prasad's point was that trying to fix the design by leaving out the people who didn't get a colonoscopy creates a potential bias, and therefore the result cannot be trusted; this doesn't mean that there is definitely no benefit. In fact, he is clear that screening with sigmoidoscopy does have a benefit and that what remains unclear is whether the added cost and risk of colonoscopy are worthwhile.
In practical terms, what is needed is to know whether a patient who is willing to consider screening, once the options have been explained, should be recommended colonoscopy (or some other option). This is still a broad recommendation, but it can be made while minimizing the noise from those who would never agree to a colonoscopy while symptom-free and who are therefore not relevant to the question.
I don't think I phrased my parent post very well. I guess my question is this: does the broad population CRC screening we do in the US, where colonoscopy is promoted as the gold standard, make sense in light of NordICC?
Should we screen? Yes: we know that invasive screening with sigmoidoscopy increases survival. Should we use colonoscopy as the preferred option? Unknown. From a theoretical perspective it is incredibly unlikely to have worse efficacy than sigmoidoscopy, and there are good reasons to think it is better. It also carries increased risks and costs a lot more. A responsible CDC would see the need to answer an important question and fund a properly designed trial; maybe even get the EU to pay part of the cost. What I'd really like to see is a three-arm study where the randomized procedure is non-invasive testing, sigmoidoscopy, or colonoscopy.
Excellent 👍
Too bad that most of your treatments (drugs) are nothing burgers. I can see that with my wife, as drugs have ruined her gut biome, among other bodily functions. Trials for poisons... what a way to make a living.
Actually, this is a simple topic. It becomes complicated if you don't understand randomization. Most experiments ask two questions.
1. What is the effect of being assigned to an intervention?
2. What is the effect of the intervention?
Generally, we are not interested in question 1, because we are studying the intervention and the important part is how it turned out. However, we often don't have good access to adherence to the protocol. In that case we have no choice but to report the answer to question 1.
One way to describe an experiment is that we hope to see whether the intervention is effective; that is, we start with groups across which potentially confounding variables have been randomized. The null hypothesis, then, is that the outcome will also look "randomized." If some subjects show a systematically different outcome, you have "broken" the randomization by showing that assignment to the intervention made a difference, which is what you are trying to find out.
Randomization is over before the experiment begins. If you are randomized, the outcome cannot affect the randomization. If you find out that subjects were not properly randomized (e.g., they were smokers but did not report it during randomization), you have to do the experiment over.
Once in the study, you cannot be un-randomized by the outcome any more than you can become un-baptized. You can become a bad person, or you can become excommunicated, or whatever, but you were baptized.
Intention to treat asks question 1.
Per protocol asks question 2.
If you can answer both questions, you should report both.
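A minimal simulation may make the two questions concrete. Everything here is hypothetical (the adherence rule, the risks, the effect size); the point is only that when adherence tracks an unobserved trait, the naive per-protocol answer drifts away from the true effect, while the ITT answer remains an unbiased estimate of the effect of assignment.

```python
# Minimal simulation (hypothetical numbers) of the two questions:
#   Q1: effect of being ASSIGNED the intervention -> intention to treat (ITT)
#   Q2: effect of RECEIVING the intervention      -> per protocol
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
health = rng.normal(size=n)              # unobserved "healthiness"
assigned = rng.integers(0, 2, size=n)    # randomized before anything else
adheres = health > 0.0                   # assumption: healthier people adhere
treated = (assigned == 1) & adheres      # receives intervention only if both

# Event risk: treatment truly removes 0.005 of absolute risk, and healthier
# people are at lower risk regardless of treatment.
p_event = 0.03 - 0.005 * treated - 0.01 * (health > 0.0)
event = rng.random(n) < p_event

itt = event[assigned == 1].mean() - event[assigned == 0].mean()
# Naive per protocol: adherers in the assigned arm vs. the whole control arm.
pp = event[(assigned == 1) & adheres].mean() - event[assigned == 0].mean()

print("true effect of treatment:  -0.0050")
print(f"ITT estimate (Q1):         {itt:+.4f}")  # diluted toward zero (~ -0.0025)
print(f"naive per protocol (Q2):   {pp:+.4f}")   # exaggerated (~ -0.0100)
```

Both numbers get reported, as above, but only the ITT contrast inherits the protection of the randomization.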
We published this argument previously and explained it in a blog post, which I prefer:
https://feinmantheother.com/2011/08/21/intention-to-treat-what-it-is-and-why-you-should-care/
There are really four populations:

(1) those invited who screened;
(2) those invited who did not screen;
(3) those not invited who screened anyway;
(4) those not invited who did not screen.

Population (3) is really the fly in the ointment. Then there are the sub-populations of (1) and (3) in whom screening detected illness (1/3 A) and those who screened but in whom no illness was detected (1/3 B). It would be interesting to know the false positives and false negatives in these two groups and the associated outcomes, including adverse side effects. Why? You might discover, for example, that sending an invitation has no impact on screening rates, or maybe it does. You might also find that screening is ineffective at accurately diagnosing disease and that the side effects resulting from false positives overwhelm the advantage of finding the disease early, or the opposite.

If you were actually trying to figure out what policy to adopt, would this information be valuable? I think it would. Why invite people to an ineffective screening? Why invite people to a screening that is more likely to cause harm than to advance health? Of course, once you start the study you probably have little ability to modify it on the fly without injecting bias. So it is best practice to really think things through before you start and to challenge your assumptions. Maybe screening is ineffective. Maybe screening is effective. I suspect that the study designers started from the assumption that screening is effective. But that assumption may not be true. Whatever the truth, it would shape the results of this study.
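As a concrete illustration of that bookkeeping, here is a small sketch with hypothetical counts (none of these numbers come from the trial): cross-tabulating invitation against actual screening yields the four populations, and the first contrast answers whether the invitation changes behavior at all.

```python
# Sketch of the four populations with hypothetical counts (not trial data):
#   (1) invited & screened          (2) invited & not screened
#   (3) not invited & screened      (4) not invited & not screened
counts = {
    ("invited", "screened"): 4_200,
    ("invited", "not screened"): 5_800,
    ("not invited", "screened"): 900,    # population (3), the fly in the ointment
    ("not invited", "not screened"): 9_100,
}

def screening_rate(arm: str) -> float:
    screened = counts[(arm, "screened")]
    return screened / (screened + counts[(arm, "not screened")])

# First policy question: does the invitation actually change behavior?
print(f"screening rate, invited:     {screening_rate('invited'):.1%}")      # 42.0%
print(f"screening rate, not invited: {screening_rate('not invited'):.1%}")  # 9.0%

# The sub-populations (1/3 A and 1/3 B) would further split each "screened"
# cell by whether disease was detected; that is where false positives and
# false negatives, and their side effects, would be tallied.
```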
A simple way of seeing the problem: viewed from the top, we have an RCT. Embedded inside the 'assigned to colonoscopy' arm is the equivalent of a classical observational study. We know that observational studies show that colonoscopies appear to have a benefit, but the point of this study is to learn whether an RCT will show a benefit. So the people who claim a benefit by comparing the two groups inside the 'assigned to colonoscopy' arm are, in effect, arguing that observational studies prove causality and that RCTs are not necessary. They may not realize they are doing this.
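A quick way to see how sharp this point is: in the hypothetical simulation below, the procedure is given zero effect by construction, yet the observational comparison embedded in the invited arm still shows an apparent benefit, simply because willingness to screen tracks underlying health.

```python
# Inside the invited arm, screened vs. unscreened is an observational
# comparison. Here the procedure has ZERO true effect by construction, yet
# the within-arm contrast still "finds" a benefit. Hypothetical simulation.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
health = rng.normal(size=n)                 # unobserved
p_accept = 1.0 / (1.0 + np.exp(-health))    # healthier -> more willing to screen
screened = rng.random(n) < p_accept

# Outcome depends on health only; screening does nothing here.
event = rng.random(n) < (0.03 - 0.01 * (health > 0.0))

print(f"event rate, screened:   {event[screened].mean():.4f}")   # lower
print(f"event rate, unscreened: {event[~screened].mean():.4f}")  # higher
```

Only the top-level comparison between the randomized arms is protected against this.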
I think that you must remain true to intention to treat. Period. Until you publish the results, that is.
And then release all the raw data and let us look at the perfect-world, as-treated picture, just to see. Because, you know, we want to know.
My comment tried to explain this more simply: as you say, release all the data. If you don't have all the data (e.g., you suspect some people spit out the pill), you have no choice. You do what we always did: report the data you have.
The English language is part of it. Re-reading an old blog post, I see that I had considered intention to treat: https://feinmantheother.com/2011/09/05/portion-control-low-carb-diets-and-the-language-of-food/
This is a complicated topic that you presented very well. Congratulations!