27 July 2020

Do nudges actually work?

Last year, we were sent a request that was intriguing, and a bit scary. 

At BIT we spend a lot of time setting up randomized controlled trials (RCTs) and other ways of evaluating impact reliably. We really care about finding out whether what we’re doing “works” – and where, when, and for whom. Over the years, that work has built up into a collection of results from hundreds of experiments, each recorded clearly in a standard format.

The request, from academics at the University of California, Berkeley, was pretty simple: can we analyze your whole database of results in North America and see what impact you’ve had overall?

The intriguing part of this request was the possibility of seeing trends and patterns in the results: were certain kinds of interventions working better? Were we seeing greater impact in some policy areas rather than others? These kinds of insights could really help us and others do things better.

The scary part was: what if this analysis shows that we’ve had zero impact overall? Or, even worse, a negative impact on outcomes? This wasn’t our impression, but we also know that our impressions can be deceptive, and a systematic look at data can tell a different story. 

Given our commitment to data and evaluation, we agreed to the request – as did the Office of Evaluation Sciences, the team in the US federal government that uses a similar mix of behavioral science and RCTs.

The results have been published today as an NBER Working Paper. The authors crunched the numbers on 165 trials testing 349 interventions, reaching more than 24 million people. These involved all levels of government and covered many policy areas, such as health, education, tax compliance and uptake of benefits. Some examples include: encouraging more diverse applications to police departments, increasing engagement with a free primary care service for lower income residents of New Orleans, and raising college enrollment among veterans. 

The analysis shows a clear, robust positive effect from the interventions. On average, the projects produced an 8.1% improvement on these key policy outcomes, from a baseline of 17.2% without the interventions to 18.6% with them.
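To see how those figures fit together: the 8.1% is a relative improvement over the baseline rate, equivalent to a gain of roughly 1.4 percentage points.

(18.6% − 17.2%) / 17.2% ≈ 1.4 / 17.2 ≈ 0.081, i.e. an 8.1% relative improvement.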

There are good reasons to be confident in this result. First, the authors had full access to all the relevant data from both organizations – both published and unpublished studies were included. That means the result is not affected by ‘publication bias’, where only the studies that find positive results get published.

Second, the large number of trials, and the range of policy areas they cover, gives more confidence in these results. Of course, that would not be true if they were all badly designed. But the authors conclude that the trials had sample sizes more than large enough to give reliable results; in fact, they had better statistical “power” than comparable academic studies. And the use of randomization supports a strong causal link between the interventions and their outcomes.

The other big point to note when thinking about the results is cost. Most of these studies were cheap to implement – many of them involved modifying existing practices, such as wording a letter differently or redesigning a form. Interestingly, the study finds that the lowest-cost interventions did not have a smaller effect on outcomes.

So, yes, most of these projects would be considered “nudges”. During the period in question, BIT North America (BITNA) was delivering the What Works Cities program, funded by Bloomberg Philanthropies, which was particularly interested in running low-cost experiments. But we want to stress – as we always have – that our work goes wider than nudging.

As we have argued before, behavioral science has the potential to improve the way regulatory systems are set up or the way legislation is drafted. While it is great that an independent analysis has shown that low-cost nudges work, we know there is more to do.

In fact, my colleague Elspeth Kirkman and I explore this potential, and the ways behavioral insights can grow in the future, in a new book from MIT Press called “Behavioral Insights”. One big priority we highlight is the need to consolidate which findings we can rely on, where the gaps are, and what size of improvement it is reasonable to expect. We see this study as a major step towards that goal.

Behavioral Insights by Michael Hallsworth and Elspeth Kirkman is published by MIT Press on 1 September. You can pre-order a copy on Amazon.
