Abstract
Randomised trials in education research are a valuable and increasingly common part of the research landscape. Choosing a sample size large enough to detect an effect, yet small enough to keep the trial workable, is a vital component of trial design. In the absence of a crystal ball, rules of thumb are often relied upon. In this paper, we offer a critique of commonly used rules of thumb and show that the effect sizes that can realistically be expected in education research are much more modest than those that studies are powered to detect. This has important implications for future trials, which should arguably be larger, and for the interpretation of prior, underpowered research.
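To illustrate the stakes of the abstract's claim, the sketch below applies the standard normal-approximation formula for two-sample power, n per arm ≈ 2(z_{1-α/2} + z_{1-β})²/d². The function name and the chosen effect sizes are illustrative assumptions, not taken from the paper; they show how sharply the required sample size grows as the expected effect size shrinks.

```python
# Minimal sketch (assumed, not from the paper): approximate sample size
# per arm for a two-sample comparison at two-sided alpha = 0.05 and
# 80% power, using the normal approximation to the t-test.
from scipy.stats import norm

def n_per_arm(d, alpha=0.05, power=0.80):
    """Approximate pupils needed per arm to detect standardised effect d."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for two-sided test
    z_beta = norm.ppf(power)           # quantile for the desired power
    return 2 * ((z_alpha + z_beta) / d) ** 2

# An optimistic "rule of thumb" effect of d = 0.5 needs ~63 per arm,
# whereas a more modest d = 0.2 needs ~393 -- over six times as many.
for d in (0.5, 0.2):
    print(f"d = {d}: about {n_per_arm(d):.0f} per arm")
```

Under these assumptions, halving the anticipated effect size roughly quadruples the required sample, which is why powering trials for realistic rather than hoped-for effects can imply substantially larger studies.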