Blog, 15th March 2022

Behavioural Science or Bullshit?

There is a problem with behavioural science. It’s not about a particular campaign, or about whether ‘behavioural scientists’ have been given too big or too small a role over recent years. Rather, it’s this:

  1. There is ‘good’ behavioural science – based on academic theory, methodologically robust, predictive, and relevant to policymakers making difficult decisions.
  2. There is bad ‘behavioural science’ – strong opinions, based on gut feeling and small, mainly qualitative, studies like the occasional focus group.
  3. It can be annoyingly hard for policymakers and the public to tell the difference.

You’ve cut your arm. It’s a deep gash. You go and see your doctor. They look carefully at your wound, and suggest that you find a fresh cow pat and rub it in.

Chances are you won’t follow this advice. But why not? 

First, you probably have some basic understanding of infection and alternative treatments that this advice doesn’t fit with. Second, you are pretty likely to ask your doctor why they think this will work. If the answer starts with ‘there’s a new study in The Lancet that…’ you might keep listening. But if the answer is closer to ‘my grandmother grew up on a farm and she always said…’, you may think that there’s no need to hear any more. In other words, with respect to medicine we are used to asking about the evidence that lies behind a claim – and that’s a really good idea.

But when it comes to behaviour, both checks often fail. First, and perhaps surprisingly, our common intuitions about behaviour – our thinking about thinking – are often wrong. We mispredict what people (including ourselves) will do. Second, people rarely ask about the evidence behind a given claim, or weigh claims according to the strength of that evidence.

Covid-19

The last two years have tested our institutions and knowledge on many fronts. The role of behaviour in the spread, suppression and even treatment and prevention of Covid has been obvious for all to see. It has also provided a rich variety of both ‘good’ and ‘bad’ behavioural science, and we – policymakers, experts and the public – should seek to tell them apart.

First line of defence: everyday public health behaviours

From right at the start of the pandemic, the public were asked to change their behaviours to slow the spread of infection. As later captured in a phrase: ‘Hands, Face, Space.’

So how to encourage millions of people to wash their hands more often? Well, you can buy a load of posters and advertising space, but what should you say? What image might you add? What logo should you attach? And would any of it make any difference to the number of people who wash their hands, and how often they do it?

We had strong hypotheses for all of these questions, based on a wide body of academic literature. However, we were dealing with a completely new and unknown context, so the only way to know whether our intervention designs were effective was to test them.

Working with the Department of Health and Social Care in the UK, BIT ran a series of rapid trials with thousands of people to test variations of messages, images, and formats of handwashing posters. Just as with a poster glimpsed on the underground or an image scrolled past online, subjects looked at each design for only a few seconds. We then tested their memory and comprehension of the message. After several iterations, we had refined the design of the posters. The messages were shorter and more direct. The images were simpler. The clutter was removed. Most importantly, our trials showed that the message was much more likely to be understood and correctly recalled.
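
To make the logic of these rapid trials concrete, here is a minimal sketch in Python of how a multi-arm poster test might be analysed. The variant names, recall rates and sample sizes below are entirely hypothetical – this is not BIT’s trial code or data. The point is simply that each arm yields a proportion of correct recall, and a standard test tells you whether the differences between arms are bigger than chance.

```python
# A minimal, hypothetical sketch of a multi-arm message trial:
# each participant sees one poster variant for a few seconds and is
# then scored on whether they correctly recalled the key message.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)

# Hypothetical recall probabilities for three poster variants
variants = {
    "original": 0.55,    # long message, cluttered layout
    "shorter": 0.63,     # shorter, more direct message
    "simplified": 0.71,  # shorter message and simpler image
}
n_per_arm = 2000  # participants randomised to each arm

# Simulate binary recall outcomes for each arm
recalled = {name: rng.binomial(n_per_arm, p) for name, p in variants.items()}

# Contingency table: correct vs incorrect recall, one row per arm
table = np.array([[r, n_per_arm - r] for r in recalled.values()])

chi2, p_value, dof, _ = chi2_contingency(table)
for name, r in recalled.items():
    print(f"{name:>10}: {r / n_per_arm:.1%} correct recall")
print(f"chi2 = {chi2:.1f} (dof = {dof}), p = {p_value:.2g}")
```

Because the outcome is quick to measure, a loop like this can be rerun on each new design iteration within days, which is what makes the rapid refinement described above possible.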

The government guidance was asking people to stand two metres apart – but how far apart do people actually stand if you ask them to do this? And what if you said one metre? Cue a trial. 

And what about masks? How much ‘splatter’ do you actually get at one or two metres when you ask people to wear a mask and talk, sing, or shout? What if you asked them to use a homemade mask? And what does wearing a mask do to other behaviours? Does it make people less cautious, or does it ‘nudge’ them to be more careful, and to maintain distance (the latter, it turns out…)? Cue another large-scale randomised controlled trial to test intervention ideas that were grounded in strong theory but as yet untried.

This is what good behavioural science looks like. 

But there were also plenty of examples where that didn’t happen. A creative agency on a roll and in a hurry. A single focus group where participants did or didn’t like an image. Cue: ‘I like this one’; ‘don’t we just need to scare people?’; or slogans like ‘Stay Alert!’ (…and do what? For what? What are you asking me?)

‘Intuition’, creativity, and small-group qualitative work can play a pivotal role, especially in the exploratory phases of policy or practice development. But it is essential that they then be coupled with robust, systematic quantitative testing before leaping to a national policy or campaign.

Second line of defence: testing and isolation

As the pandemic took hold and new systems were built, new behavioural challenges were all around.

How to encourage people to get tested? How can a contact tracer help people remember who they saw and where they went? How best to encourage someone to isolate at home, with all the frustrations and inconveniences involved? Are there ‘Covid-secure’ – or at least safer – ways of running larger events or businesses?

These are practical but pivotal questions – and ones to which the answers aren’t obvious. Borrowing practices from countries that seem to be doing a good job is a good place to start, such as the care packs sent to people in South Korea when asked to isolate (thermometer included), or the contact tracing app that Singapore was early to develop. But there are so many such operational and policy questions, and even where something worked elsewhere, would it work in another country and context with many other differences?

Once again, good behavioural science – like all science and good operational practice – twins humility (what are the limits of what we really know?) with fast but robust experimentation. For example, a key question for the UK Test and Trace system was whether to text or to call people to encourage and remind them to maintain their isolation. Texting is cheaper, can be done in larger volumes, and allows the content to be controlled more precisely. Calling is more expensive and more intrusive, but potentially more ‘human’. We worked with Test and Trace to run a large-scale RCT to see what actually worked. The result? Texting did not seem to boost isolation, but calling did.
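
For readers curious what ‘seeing what actually worked’ amounts to statistically, here is a minimal sketch with hypothetical counts – not the real Test and Trace results. At bottom, the trial comes down to comparing the proportion of people maintaining isolation between the call arm and the text arm.

```python
# A minimal, hypothetical sketch of the comparison at the heart of the
# text-vs-call trial: does the proportion of people maintaining
# isolation differ between the two arms? (Counts are illustrative.)
from statsmodels.stats.proportion import proportions_ztest

isolated = [1180, 1015]  # hypothetical: people maintaining isolation [call, text]
n_obs = [2000, 2000]     # participants randomised to each arm

z_stat, p_value = proportions_ztest(isolated, n_obs)
print(f"call: {isolated[0] / n_obs[0]:.1%}, text: {isolated[1] / n_obs[1]:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3g}")
```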

Similarly, there was some seriously good work organised between the Chief Scientists at DCMS and BEIS, with SAGE backing, to identify ways that larger cultural and sporting events could be run in ways that were ‘Covid-secure’. This involved serious methodological challenges, and required combining quantitative and qualitative methods with pragmatic experimental designs.

Such systematic testing can produce major gains, but you still have to weigh up the costs and benefits. There are situations where trial results are appropriately overruled, such as if calling people proved too expensive for the gains achieved (in this case, it wasn’t).

But there are occasions where good behavioural science is trumped by bad behavioural science. For example, a major concern during parts of the pandemic was that people with other health conditions were not seeking treatment. We ran a large trial to test which images and messages were more effective at addressing this. We found that the most effective content included the face of someone in pain, with an accompanying message encouraging people to seek treatment if they felt unwell. But, on the back of a focus group, a decision was taken instead to run with an image of a confident, smiling nurse.

If you have to choose between advice based on an RCT with several thousand people, and a focus group where a few people preferred the smiling face, go with the RCT. Seriously. 

Third line of defence: vaccination and treatment

Let us celebrate the astonishing achievement and speed with which the medical community developed, and then deployed, vaccines. But having a vaccine doesn’t mean it ends up in arms. 

When you do finally get a message that your vaccine is ready, does it make a difference what the message says? Of course. And how would we find out which messages work best? You guessed it: run a trial.

In trials in US cities, we found large differences in how likely different messages made people to take up the offer of a jab. In a trial with 20,000 people, we found that the most effective message tapped into people’s desire to protect and support their friends and family: ‘Your loved ones need you. Get the COVID-19 vaccine to make sure you can be there for them.’

Finally, it’s worth noting that good behavioural science can also stop us doing things that don’t work. 

As vaccine rates slowed, many countries turned to financial incentives to encourage the remaining ‘hesitants’ to get their jabs. It is a great example of a ‘common sense’ action: lots of people have got the jab, so for those who are wavering, let’s just give them an extra little encouragement.

Early field studies from across the world seemed encouraging, with a cash incentive getting a precious few extra percent of people jabbed. In Sweden, a small monetary incentive (~£17) increased vaccination rates by about four percentage points. But when we tested a range of different incentives in the UK context, we found that the story was more complex. Financial incentives could indeed encourage people who were already intending to get a jab to bring it forward, but they tended not to work on, or even to discourage, people who were worried about the jab or ‘disengaged’.

Even worse, we found that incentives had a major ‘backfire’ effect across all groups in terms of whether they would get a follow-up, or booster, jab. In essence, financial incentives shift people from getting jabs because they are a good thing to do, for them and the community, to something they do for money (and with more suspicion!). A short-term win, a medium-term big loss – and extra cost.
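
A hedged sketch of how such a backfire can show up in the analysis: the simulated data below are purely illustrative, not BIT’s trial data, but a logistic regression with an interaction term is one standard way to test whether an incentive’s effect on follow-up intentions differs between people who already intended to get a jab and those who were hesitant.

```python
# A minimal, hypothetical sketch of detecting a heterogeneous 'backfire'
# effect: does a cash incentive change stated intention to get a
# follow-up (booster) jab, and does that effect differ between people
# who already intended to get a jab and those who were hesitant?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 8000

df = pd.DataFrame({
    "incentive": rng.integers(0, 2, n),  # randomised: 1 = offered cash
    "hesitant": rng.integers(0, 2, n),   # baseline attitude group
})

# Illustrative outcome model: incentives slightly reduce booster
# intention overall, and more so among the hesitant group.
base = 0.7 - 0.3 * df["hesitant"]
effect = -0.05 * df["incentive"] - 0.10 * df["incentive"] * df["hesitant"]
df["booster_intent"] = rng.binomial(1, np.clip(base + effect, 0, 1))

# The interaction term tests whether the incentive effect differs by group
model = smf.logit("booster_intent ~ incentive * hesitant", data=df).fit(disp=False)
print(model.summary().tables[1])
```

A significant negative interaction coefficient here is the statistical fingerprint of the pattern described above: a small effect in one group and a larger discouraging effect in another.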

Conclusion: methods matter! (don’t confuse bad with good)

There are important controversies around public behaviour and specific behavioural interventions. Should ‘fear’ be used? Should vaccinations be mandated for healthcare workers? Should we pay people to get jabbed?

Many such proposals can be thrown out on empirical grounds alone. For example, fear-based campaigns are generally NOT effective! More important is to direct people’s attention towards what they can do to reduce their risk (as set out in evidence that Professor Stephen Reicher and I gave to Parliament back in January last year). In other words, increase people’s efficacy rather than stoking their anxiety. In the case of Covid, that means going with ‘Hands, Face, Space’ or Japan’s 3Cs rather than ‘Stay Alert!’ or ‘Panic!’

Of course, just because something works, that doesn’t necessarily mean we should do it. Politicians and society need to make that call. 

But let’s at least try and distinguish one kind of BS from another – behavioural science from bullshit. Or indeed science from opinion in general. Next time you hear about an interesting and eye-catching idea about how to change behaviour, recognise it for what it is. A claim. Then ask ‘what’s it based on?’

Recognising claims, and probing the robustness of the methods behind them, is the key to being a better bullshit detector. And it’s a skill we can learn, and even teach our kids.

And well spotted. That’s a claim. And yes, there is a study to back it up…
