The use of statistics in randomised controlled trials is central to the whole exercise, and it is a terrible waste of everyone’s time when it is neglected. This timely paper in the BMJ is worth the attention of anyone undertaking such trials.
The importance of sample size determination in randomised controlled trials has been widely asserted, and it must be reported in published articles. An a priori sample size calculation determines the number of participants needed to detect a clinically relevant treatment effect.
The conventional approach is to calculate sample size from four parameters: type I error, power, assumptions about the control group (response rate and standard deviation), and the expected treatment effect. Type I error and power are usually fixed at conventional levels (5% for type I error, 80% or 90% for power). Assumptions about the control group are often prespecified on the basis of previously observed data or published results, and the expected treatment effect should be hypothesised as a clinically meaningful effect. Uncertainty about the event rate or standard deviation in the control group, or about the treatment effect, can lead to lower than intended power.
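To make the four parameters concrete, here is a minimal sketch of the standard normal-approximation calculation for comparing two proportions; the 40% control response rate and 55% treatment response rate are purely hypothetical figures, not taken from the paper.

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided comparison of
    two proportions, using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = abs(p_control - p_treatment)          # hypothesised treatment effect
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Hypothetical assumptions: 40% response in the control group,
# 55% expected in the treatment group
n_per_group = sample_size_two_proportions(0.40, 0.55)
```

Note how sensitive the result is to the assumed control group rate and effect size: small errors in either can substantially under- or overestimate the required number of participants, which is exactly the discrepancy the paper investigates.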
Charles and colleagues assessed the quality of reporting of sample size calculations in published reports of randomised controlled trials, the accuracy of the calculations, and the accuracy of the a priori assumptions.
In this survey of 215 reports published in 2005 and 2006 in six general medical journals with high impact factors, only about a third (n=73, 34%) adequately described the sample size calculation: that is, they reported enough data to recalculate the sample size, the calculation was accurate, and the assumptions about the control group differed by less than 30% from the observed data. This study raises two main issues. The first is the inadequate reporting and the errors in sample size calculations, which are surprising in high quality journals with a peer review process. The second is the large discrepancies between the assumptions and the data in the results, which raises a much more complex problem, because investigators often have to calculate a sample size with insufficient data to estimate these assumptions.
Reporting of the sample size calculation has greatly increased in the past decades, from 4% of reports describing a calculation in 1980 to 83% of reports in 2002. This review highlights that some parameters for sample size calculation are frequently absent and that miscalculations occur.
They also found large discrepancies between the assumed values for control group parameters used in the sample size calculations and those estimated from the observed data. Assumed values were fixed higher or lower than the corresponding results data in roughly even proportions, a finding that differs from the results of a previous study.
These results suggest that researchers, reviewers, and editors do not take reporting of sample size determination seriously. An effort should be made to increase transparency in sample size calculation or, if sample size calculation reporting is of little relevance in randomised controlled trials, perhaps it should be abandoned, as suggested by Bacchetti.
After years of trials with supposedly inadequate sample sizes, it is time to develop and use new ways of planning sample sizes.
Charles et al. (2009) Reporting of sample size calculation in randomised controlled trials: review. BMJ vol 338, pp 1256-1259
- Martin Eastwood