I have never been a big fan of the term "statistically significant," and I think the two studies cited in this article justify that skepticism. The second showed incidences of 3.9% and 4.6% for the drug and control groups, respectively, followed by the statement that "the 0.7% absolute risk reduction met statistical significance". The first showed incidences of 7% and 10.5%, followed by "the 3% ARR after 2 years was highly statistically significant".

That first study included a graph with what looked like a large gap between the two lines tracking incidence of death over time. But the y-axis ran only from zero to 14%. Plotted on a y-axis from zero to 100%, the two lines would appear almost identical. This method of deception was described in the classic text How to Lie with Statistics many years ago.

Statistical significance certainly serves the needs of pharmaceutical companies and medical-device manufacturers looking to justify the use of their products, and I am sure that researchers who need published articles to advance their careers find it useful. But as a criterion for clinical decision making, it is of little use, if any.
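For readers who want the arithmetic behind those figures, here is a minimal sketch that converts the quoted incidences into absolute risk reduction (ARR) and number needed to treat (NNT). The sample sizes are not reported in the comment, so no significance test is attempted; the rates below are simply the percentages quoted above.

```python
def arr_and_nnt(control_rate, drug_rate):
    """Return (ARR, NNT) given event rates as fractions.

    ARR = control rate minus drug rate; NNT = 1 / ARR,
    i.e. patients treated per one event prevented.
    """
    arr = control_rate - drug_rate
    return arr, 1.0 / arr

# Second study: 3.9% drug vs 4.6% control
arr2, nnt2 = arr_and_nnt(0.046, 0.039)
print(f"ARR = {arr2:.1%}, NNT = {nnt2:.0f}")  # ARR = 0.7%, NNT = 143

# First study: 7% drug vs 10.5% control
# (straight subtraction gives 3.5%; the article quotes "3% ARR",
# possibly rounded or measured at a different timepoint)
arr1, nnt1 = arr_and_nnt(0.105, 0.07)
print(f"ARR = {arr1:.1%}, NNT = {nnt1:.0f}")  # ARR = 3.5%, NNT = 29
```

The NNT figures make the clinical-relevance point concrete: a "statistically significant" 0.7% ARR still means treating roughly 143 patients to prevent one event.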