The probability of detecting a true effect in an experimental study.
Clinical trials involve a subset of a population of interest. Results are then extrapolated to the population as a whole. However, this raises the risk that the effects seen in the trial are not representative of those that would be seen in the whole population, and this risk is greatest when sample sizes are small. Power analysis provides a measure of confidence in results and supports the statistical validity of extrapolating to larger populations.
Power analysis is a key element of study design, particularly for identifying appropriate sample sizes (in clinical trials, how many people need to be recruited). As recruitment can be challenging, there is an incentive to minimize the number of participants in a study. However, with small numbers, it can be difficult to be confident that the observed results provide a reliable indication of an intervention’s true effects.
A second key factor affecting the power of a study, and hence its sample size, is the effect size: the magnitude of change considered clinically meaningful, which can be used to judge whether an intervention has demonstrated efficacy. Detecting a small change requires a large number of participants to achieve good statistical confidence.
Power analyses estimate the likelihood (expressed as a percentage) that a particular effect size will be detected for a particular sample size. In practice, the desired power and effect size are specified in advance in order to identify an appropriate sample size (e.g. a 95% probability of detecting a 20% decrease in an outcome measure of interest).
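The relationship between power, effect size, and sample size can be sketched with the standard normal-approximation formula for a two-group comparison of means. This is an illustrative calculation only, not a substitute for a full power analysis; the function name and defaults are assumptions for this sketch, and the effect size is expressed as a standardized difference (Cohen's d).

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size: float,
                          power: float = 0.95,
                          alpha: float = 0.05) -> int:
    """Approximate participants needed per group for a two-sided
    two-sample comparison, using the normal approximation.

    effect_size: standardized difference between groups (Cohen's d).
    power:       desired probability of detecting the effect.
    alpha:       significance level for the two-sided test.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for significance
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# A medium effect (d = 0.5) at 80% power needs roughly 63 per group;
# the same effect at 95% power needs substantially more.
print(sample_size_per_group(0.5, power=0.80))  # → 63
print(sample_size_per_group(0.5, power=0.95))  # → 104
```

Note how the sample size grows as the target effect size shrinks or the desired power rises, which is why detecting small clinical changes requires large trials.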
If a study includes too few participants for statistically robust conclusions to be drawn, it is described as underpowered.
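Conversely, the power actually achieved by a study of a given size can be estimated, which makes "underpowered" concrete: the achieved power falls below the conventionally accepted threshold. The function below uses the same normal approximation as above and is a sketch under those assumptions.

```python
from statistics import NormalDist

def achieved_power(effect_size: float,
                   n_per_group: int,
                   alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-sample comparison with
    n_per_group participants in each arm (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    # Probability that the observed standardized difference exceeds
    # the critical value, given the true effect size.
    return z.cdf(effect_size * (n_per_group / 2) ** 0.5 - z_alpha)

# With only 20 participants per group, a medium effect (d = 0.5) gives
# roughly 35% power: such a study would usually be called underpowered.
print(round(achieved_power(0.5, 20), 2))
print(round(achieved_power(0.5, 63), 2))  # ~0.80 with 63 per group
```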