Enjoyed the conversation, especially the examples of the DANISH trial and the zodiac sign-aspirin thing.
I think the most puzzling and frustrating element in a clinical research protocol is sample size estimation. Everyone wants it, very few understand it, and those who do understand it approach it with a lot of skepticism. It is also frustrating to see papers get published with very small sample sizes, yet when it comes to our own submissions, the editor or peer reviewers reject them, saying the sample size isn't "enough." Science is unjust and unfair, just like politics, in many cases. "All animals are equal, but some animals are more equal" works in science too.
By the way, I don't think I quite agree with Ben's explanation of the 95% CI. Perhaps I didn't fully get it, but I am sure there are more intuitive ways to explain it.
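For what it's worth, the picture I find most intuitive is the repeated-sampling one. Here is a minimal simulation sketch (my own toy numbers, not anything from the episode) showing that the "95%" describes the procedure, not any single interval:

```python
# Toy simulation: repeat a study many times, compute a 95% CI each time,
# and count how often the interval contains the true mean.
# true_mean, sigma, n, and n_reps are made-up illustration values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, sigma, n, n_reps = 10.0, 2.0, 30, 10_000

covered = 0
for _ in range(n_reps):
    sample = rng.normal(true_mean, sigma, size=n)
    m, se = sample.mean(), sample.std(ddof=1) / np.sqrt(n)
    # 95% CI from the t distribution with n - 1 degrees of freedom
    lo, hi = stats.t.interval(0.95, df=n - 1, loc=m, scale=se)
    covered += (lo <= true_mean <= hi)

print(f"Coverage over {n_reps} simulated studies: {covered / n_reps:.3f}")
# Prints roughly 0.95: about 95% of intervals built this way catch the true mean.
```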
Also, with regard to power, sample size and statistical significance testing, I really like the Neyman-Pearson approach for its intuitiveness (a rough sample-size sketch follows the link below):
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4347431/
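To make that concrete, here is a back-of-the-envelope sample-size sketch for a two-arm comparison of means, using the usual normal approximation with a standardized effect size d, two-sided alpha, and target power. It's my own toy example under those assumptions, not something taken from the linked paper:

```python
# Approximate per-arm sample size for comparing two means:
# n_per_arm = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d) ** 2
import math
from scipy.stats import norm

def n_per_arm(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per arm for a two-sample comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = norm.ppf(power)           # quantile corresponding to the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# E.g., for a "medium" standardized effect of d = 0.5 at 80% power:
print(n_per_arm(0.5))  # roughly 63 per arm (a t-based correction nudges it to ~64)
```

What I like about framing it this way is that the trade-offs are right there in the formula: shrink the effect size you care about, tighten alpha, or demand more power, and the required n grows accordingly.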