
What do evidence and olives have in common?

5th Jul 2018

People can learn in a number of ways – through direct instruction, such as early lessons from parents; through watching what other people do and learning from their actions; or through their own experiences and trial and error.

Psychologists have long studied the way in which we learn from our experience as well as from others, and the way in which we take cues from our environments about the right way to behave. Kees Keizer and colleagues found that when people were in an environment that contained ‘disorder’ – graffiti – they were more likely to litter than if they were in a tidy and clean environment – learning from the cues around them.

Very often, our learning process is neither purely individual nor purely social. For example, one of us is fond of olives, and the other dislikes them intensely. The prevailing wisdom (albeit not tested with an RCT) is that olives are an acquired taste – that on initial exposure, nobody much likes them, but we grow to love them over time. The initial exposure is social – if our parents introduce us to them, or if we grow up in an olive-rich environment – but acquisition of the taste afterwards is more personal.

Might the same thing be true of evidence? New research by Alex Kroll, from Florida International University, and Don Moynihan, from the La Follette School of Public Affairs, suggests so. They look at the efforts by the Bush and Obama administrations to increase two forms of evidence in government – programme evaluation (to determine whether an intervention works) and performance management (using data to monitor how a programme is performing). Take-up of both has been slow, and the initial conclusion was that both efforts had failed. However, looking at longitudinal data, the authors find that the two programmes did become effective and more widely embraced over time.

They also find a significant complementarity between the two forms of evidence – managers who have been involved in a programme evaluation are more likely to make use of performance management data afterwards. The authors argue this is because evaluation helps managers better understand the relationship between sometimes abstract data and the on-the-ground reality of the policies they are responsible for.

If this is true more generally, it suggests that we’re likely underestimating the benefits of any particular evaluation, as the act of performing an impact evaluation changes the culture around evidence and data and has downstream consequences for their wider and better use.

We might be seeing this in education, in the form of the ResearchEd movement – a series of incredibly popular conferences about evidence in education, which tens of thousands of teachers give up their free time to contribute to and attend. The first ResearchEd conference was held in 2013, two years after the establishment of the Education Endowment Foundation, which has brought randomised trials directly into thousands of schools in the UK – perhaps the one stimulated demand for the other? If bringing evidence into a field stimulates demand for it, that’s something to get excited about.
