Just as a note to the discussion of the SAMSON trial:
You failed to mention that they excluded participants with clinically measurable side effects; leaving that out makes it sound as though statins generally have no side effects. From the exclusion criteria in the appendix:
• History of statin intolerance with creatine kinase elevation greater than 5 times the upper limit of normal (ULN)
• History of statin intolerance with anaphylaxis
• History of statin intolerance with myalgia and rise in serum creatine kinase
• History of statin intolerance with rhabdomyolysis
• History of statin intolerance with liver function abnormalities, defined as aspartate aminotransferase (AST) or alanine aminotransferase (ALT) >3 times the ULN
Further:
• In clinical judgement of study doctor, participant should not be enrolled on the study
• Side effects taking longer than 2 weeks to present (because in such participants much longer blocks of treatment would be required; if the present study is positive, such studies will be planned for the future) [note: in the last treatment, obviously]
Or, in other words: when you exclude clinically symptomatic people, as well as those who developed side effects more than 2 weeks after the onset of statin therapy, you don't see a statistically significant difference in side effects between one specific statin and placebo in previously statin-intolerant patients.
You can do an experiment without a control arm. All you need is a control, and it can be a before-and-after comparison. Suppose you follow a defined population for years and measure a parameter (one not used to define the population). Then you ask this population to take a red pill with nothing in it, and you observe a decrease in the parameter. Then you ask the population to stop taking the pill, and you see the parameter return to its initial level. We could conclude that there is a placebo effect.
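To make that design concrete, here is a minimal simulation sketch in Python. Every number in it is invented for illustration, and it simply assumes the hypothesized placebo effect exists, so you can see what the resulting before/on-pill/after (A-B-A) pattern would look like:

```python
import random

random.seed(42)

N = 500              # cohort size (invented)
BASELINE = 100.0     # long-run mean of the tracked parameter (invented units)
PLACEBO_DROP = 5.0   # hypothesized placebo effect while on the red pill
NOISE = 10.0         # per-measurement individual noise

def cohort_mean(shift):
    """Mean of one measurement wave, with an optional shift of the true level."""
    return sum(BASELINE + shift + random.gauss(0, NOISE) for _ in range(N)) / N

before = cohort_mean(0.0)             # years of follow-up, no pill
on_pill = cohort_mean(-PLACEBO_DROP)  # inert red pill, hypothesized response
after = cohort_mean(0.0)              # pill withdrawn, level returns to baseline

print(f"before: {before:.1f}  on pill: {on_pill:.1f}  after: {after:.1f}")
# The drop-then-return pattern is what the design would read as a placebo
# effect -- in this sketch, of course, only because we built it into the data.
```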
Big point this study misses: how did the psychiatrist's words affect each patient's chance of symptom reduction or remission?
We think of words as incidental, but it's the other way around: words can make us happy or sad without any drug at all. How the psychiatrist frames the patient's chances of improvement matters - maybe even more so than the effect of the drug itself.
See also this study, where patients who were /covertly/ administered diazepam did not experience nearly the same symptom reduction as those who received the drug openly:
https://www.thelancet.com/journals/laneur/article/PIIS1474-4422(04)00908-1/abstract
Psychiatry is easily manipulated because it is all in the mind and has no concrete foundations. Cancer and heart disease are different in that many of the problems are visible. I trust no psychiatric studies, trials or otherwise. As far as placebos go, there is no way to tell the state of a placebo, since there are too many variables within a human at any given time.
Excellent analysis, Dr. Mandrola, of a study that shows how statistics, when used with enthusiasm, can violate the principles of the epistemology of science and convince the unwary, that is, almost all readers of biomedical science. With respect to a study of efficacy and safety in which the placebo is the "control," inferences about the null hypothesis of no difference between two groups (which would indirectly test the hypothesis about the intervention's mechanism of action, in this case a psychiatric medication) can only be extrapolated to the concurrent comparison, that is, between the active and placebo arms.
Conducting a meta-analysis of the "before and after" of the experimental placebo groups is an interesting form of statistical manipulation that violates principles of causality and is a "bocatto di cardinale" for detractors of psychiatric medication.
By the way, the bias represented by this study is called in Latin "post hoc ergo propter hoc" ("after this, therefore because of this"): assuming that the subsequent change in the placebo group extracted from each trial was due solely to the placebo effect. Besides regression to the mean, there are additional explanations for the before-and-after difference, such as the bias of knowing one is being observed (the Hawthorne effect), the Pygmalion effect, and the "desire to improve," which could have existed differentially in each individual experiment.
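To see how regression to the mean alone can mimic a placebo response, here is a toy simulation (all parameters invented): patients are enrolled only if a noisy screening score is high, and their follow-up scores fall even though nothing at all is done to them.

```python
import random

random.seed(1)

TRUE_MEAN, BETWEEN_SD, WITHIN_SD = 20.0, 4.0, 4.0  # invented symptom-score model
CUTOFF = 24.0  # enrolment threshold on the screening score

def score(true_level):
    """One noisy measurement of a patient's stable symptom level."""
    return true_level + random.gauss(0, WITHIN_SD)

screening, followup = [], []
for _ in range(100_000):
    true_level = random.gauss(TRUE_MEAN, BETWEEN_SD)
    s = score(true_level)
    if s >= CUTOFF:                         # enrolled because screening was high
        screening.append(s)
        followup.append(score(true_level))  # second measurement, no treatment

print(f"mean at screening: {sum(screening)/len(screening):.1f}")
print(f"mean at follow-up: {sum(followup)/len(followup):.1f}")
# Scores 'improve' with no intervention at all: the high screening values
# were partly noise, so the second measurement regresses toward the mean.
```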
Jairo
The placebo/nocebo effects have been demonstrated in many studies over the years. Frankly, I don't see what a non-placebo arm would add to the understanding. These effects, by definition, always apply to subjective endpoints. Psychiatric disorders are probably the last area I would explore to explain these very common phenomena. Most physicians with experience would agree that one's health and the way one feels are two different things. Obviously they are related, but the reasons for the differences are speculative and will probably always defy scientific explanation. As cogently explained in the comment below by M. Makous, the whole point of the placebo is to remove these poorly understood psychological factors from consideration in evaluating a treatment or procedure.
For you and your readers, I would like to share this: https://bit.ly/Echeverry_J_A_FINAL_Faslehood_origin_diabesity_pandemic
Don't most psychiatric trials involve, at the least, substantial interaction with clinical staff, and often therapy as well? It makes sense to compare placebo to active compound in this context, but why would you look at placebo effects in isolation when other treatment (even just sustained caring attention from someone with authority) is involved? You wouldn't expect cholesterol to respond to being measured by a nice person every week, but it's weird to call that a "placebo effect," the implied effect of nothing, when treating a socially mediated condition like depression. The ordinal ranking of strength of treatment without active medication by condition (depression, GAD and panic disorder up top, mania, OCD and schizophrenia at the bottom) has more validity, though. (Someone recently suggested in a comment thread that perhaps the efficacy of benzodiazepines in catatonia is due to placebo effect, which was a little funny but made me reflect on how misleading it can be to generalize about psychiatric conditions in public-facing communication.)
In actual practice, the researchers must inform the subject: "You have a 50-50 chance of getting the placebo vs. the real drug. The placebo is a dummy pill with no medication. Neither of us knows which one you are taking." Thus, the researcher has undermined the very point of a placebo, which is to fool the recipient. In a placebo-controlled study, the point of the placebo is to REMOVE the placebo effect from the active medication, not to study the placebo effect itself.
A true placebo has to fool the patient: "This pill is the latest medication, with powerful effects that studies have shown to be particularly effective in treating your [condition]. It is quite safe, with no known side effects or interactions." The prescriber should deliver this with great gravitas and a confident, reassuring tone. This bold falsehood is at the heart of a placebo, and it could not pass an ethics board. Hence, we never see a bona fide medical study of the placebo effect.
(In my understanding, the essence of social psychology is a combination of placebo/nocebo effects, so there are probably lots of non-medical studies looking at some form of placebo and its effect on social behavior.)
When you say “Another reason you need non-placebo arms is that simply being on a tablet can cause affects in the body.”, I think you mean “effects”, not “affects”. Then again, in talking about psychiatry, maybe you do mean “affects”.
This editorial and Dr Murphy’s excellent article are must-reads. They also teach why the widely used change from baseline in treated patients does NOT measure patient response to treatment.
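To see why, here is a toy sketch (all numbers invented): every simulated patient receives exactly the same true drug effect, yet the observed changes from baseline still spread widely, so sorting patients into "responders" and "non-responders" by their change scores mostly sorts noise.

```python
import random

random.seed(7)

TRUE_EFFECT = -5.0   # identical true effect for every patient (invented)
WITHIN_SD = 6.0      # measurement-to-measurement noise within a patient

changes = []
for _ in range(10_000):
    baseline = 30.0 + random.gauss(0, WITHIN_SD)
    post = 30.0 + TRUE_EFFECT + random.gauss(0, WITHIN_SD)
    changes.append(post - baseline)

strong = sum(c <= -10 for c in changes) / len(changes)
none = sum(c >= 0 for c in changes) / len(changes)
print(f"'strong responders' (change <= -10): {strong:.0%}")
print(f"'non-responders'    (change >= 0):   {none:.0%}")
# Every patient has the same true response, yet change-from-baseline labels
# roughly a quarter as strong responders and a quarter as failures -- pure
# within-patient noise, not heterogeneity of response.
```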
Interesting read! However, isn't this only true if we are tracking pre- and post-scores, as opposed to tracking changes over time? For example, suppose we track depression scores in a placebo arm and a treatment arm starting 3 weeks before treatment and continuing through 4 weeks post; would this not also solve the issue?
If you knew the average regression-to-the-mean time for each condition tested, yes - for deciding whether the null is true vs. false. However, to quantify the absolute effects, you need a no-treatment arm.
Hmm, that's a good point, though I'm not sure you need the regression-to-the-mean time... As long as I have multiple data points pre-treatment (so that I can establish a reliable baseline), don't I only need to show that, on average, patients were stable before the intervention --> improved on the intervention --> regressed post? Also, if this is the case, why can't I compare the average score pre-intervention with the average during placebo treatment and quantify the effect that way?
A study long ago by C. Edward Davis et al. showed that even if a patient passes a triple screen (three successive clinic visits) for LDL cholesterol > 190 mg/dL, there is still regression to the mean, with a surprising proportion of patients having LDL < 190 immediately before randomization to a cholesterol-lowering drug.
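Here is a toy reconstruction of that phenomenon (parameters invented, not taken from the Davis paper): even requiring three successive readings above 190 selects partly on measurement noise, so a sizable fraction regress below 190 by the pre-randomization visit.

```python
import random

random.seed(3)

TRUE_MEAN, BETWEEN_SD, WITHIN_SD = 175.0, 25.0, 15.0  # invented LDL model, mg/dL
CUTOFF = 190.0

passed, below_at_randomization = 0, 0
for _ in range(200_000):
    true_ldl = random.gauss(TRUE_MEAN, BETWEEN_SD)
    screens = [true_ldl + random.gauss(0, WITHIN_SD) for _ in range(3)]
    if all(s > CUTOFF for s in screens):               # passed the triple screen
        passed += 1
        final = true_ldl + random.gauss(0, WITHIN_SD)  # value at randomization
        below_at_randomization += final < CUTOFF

print(f"passed triple screen: {passed}")
print(f"below 190 at randomization: {below_at_randomization / passed:.0%}")
# Three consecutive high readings still select people whose true level sits
# below the cutoff, so a surprising share measure < 190 at the next visit.
```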
Awesome work, now look at how "placebo" arms are used in vaccine trials...
You must have read Turtles All the Way Down...
The SAMSON trial is interesting. “Each day the patients recorded how they felt on a smartphone app.”
How did they verify it? Was it the patient who operated the smartphone? Or their kids? Or a cat?
Also… Are (all) trial subjects pre-warned about placebos? If they are, and they sign a consent form to that effect, the very fact of knowing that they will be given fake stuff now and then may affect their physiology (just imagine the anger).
If they are not informed, they probably know this from other sources (the internet and family). So they know that they will be given fake stuff and they know that they were not told so - thus, lied to twice. Just imagine the anger.
How is this accounted for in trials?
Study participants consent to being randomized to a treatment condition. They are aware they may be getting a placebo. No one is forced to participate and no one is lied to.
Whatever the participants know (by whatever route) will activate biochemical processes in their bodies. Some may have hope, some may be indifferent (or giving up), and some may feel that they deserve the drug and not some placebo (that's the perceived lie I referred to). High hopes, despair, or other feelings may be detectable in biological tests.
So a whole range of reactions and processes is dismissed, even though they may prove decisive for the outcome. This is probably what plays a strong role in "spontaneous" self-curing cases. This is what is certainly decisive in tribal ceremonies. This is crucial in psychology and psychotherapy. None of these three areas is rare or unusual, so it's baffling why mainstream medicine won't look into it at all.
The effects of psychological expectation are the primary reason we use placebo arms in the first place. Because the participant doesn't know which condition they've been randomized to, in a large enough sample those effects will, on average, be the same between conditions. Ideally the only difference in the experimental protocol will be whether they've received the intended treatment or the inactive one. In a double-blind procedure, the experimenters don't know which participants are in which condition until after the data has been collected, removing the impact of their expectancies on measurement. This is how experimental medicine works, and there are lots of great resources that can get you up to speed if you are interested. I don't know why you think that the effects of expectation are being ignored by mainstream medicine when they are very specifically being addressed by placebo controls.
(I have invariably found that when someone says "mainstream science/medicine refuses to look into x" they have not checked whether this is true, because there are always plenty of papers on the topic, often from diverse disciplines, some explicitly taking the perspective the commenter is arguing from. Here's one for you, Dan: https://www.acpjournals.org/doi/abs/10.7326/0003-4819-136-11-200206040-00011)
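Here is a minimal sketch of that balancing property (everything invented): give each simulated participant a private "expectancy" level, randomize them to arms, and the arm means come out nearly identical, so expectancy cancels out of the between-arm comparison.

```python
import random

random.seed(5)

N = 10_000
# Each participant carries a private 'expectancy' score (hope, scepticism, ...)
expectancy = [random.uniform(0.0, 1.0) for _ in range(N)]

random.shuffle(expectancy)        # random assignment ignores expectancy entirely
drug_arm = expectancy[:N // 2]
placebo_arm = expectancy[N // 2:]

def mean(xs):
    return sum(xs) / len(xs)

print(f"mean expectancy, drug arm:    {mean(drug_arm):.3f}")
print(f"mean expectancy, placebo arm: {mean(placebo_arm):.3f}")
# Both arms end up with (nearly) the same mix of hope, anger, and indifference,
# so these factors cancel in the drug-vs-placebo comparison: randomization
# controls even what it cannot measure.
```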
A placebo eliminates only one effect: the yes-or-no guessing about what you received. But it does not handle any of the emotions related to the subjects' situation - their talks with family members, swings of emotional state, and so on - everything I wrote about.
All these emotional and psychological factors are different for different subjects - which brings in a storm of uncontrollable and undeterminable effects which are completely neglected by the protocol. Metaphorically, every subject is “contaminated” with emotional burdens which may and do affect their condition and the outcome of the trial. We just don’t admit it.
However, since these are subjective experiences, no one can really quantify them, and they will remain that extra factor which may trigger curing or contribute to worsening the condition, regardless of the value (or lack of it) of the drug or therapy.
My point is that by omitting the human factor we are still in the dark. We only think that double-blind is objective, while it is far from it. I would say that placebo and randomization are only an elegant excuse not to look into the subjective nature of curing, spontaneous curing or self-curing - including its effectiveness. That's obvious: if we examined it and found it statistically significant and medically important… why manufacture any drugs? Why educate doctors? Why invent extremely expensive technology? No more patents, no more royalties.
BTW, all trials are sponsored, i.e., funded by particular economic interests. How does this affect the course of a trial (including the selection of subjects)? Shouldn't we double-blind the funding mechanism, so that the center does not know whether it will ever get any money for its work?
A great contribution about something often misunderstood!