Dr Patrick Taylor
Principal Research Advisor
At BIT we’ve been doing a growing amount of work in recent years to support third sector organisations and funders to develop their programmes, particularly in the fields of social action and social capital. This often involves helping organisations to develop theories of change and do robust evaluations.
In the course of our work we keep encountering two misconceptions about programme design and evaluation in the third sector. Getting past these misunderstandings will, we feel, improve the quality of programme design and evaluation, and could also improve the funding practices that support the sector.
The misconceptions are:

1. That qualitative personal accounts can show whether a programme has caused a change.
2. That RCTs are only useful for proving whether a particular programme has had an impact.
Qualitative personal accounts and quantitative data are different types of information, and they make very different contributions to a project.
Personal accounts are particularly helpful when doing exploratory research. Before designing an intervention, we often ask: “What is the nature of the challenge we’re trying to address?” Answering this question almost always requires asking the people who experience the challenge in question, as well as looking at data.
For example, when coming up with ideas to help increase the number of students who pass their GCSE resits, we spoke to students who were trying to pass and teachers who were trying to help them. Personal accounts can also be valuable in pilots, when we are looking for evidence of promise, feasibility and readiness for scale. We also draw on lived experience when trying to understand how an intervention works (or what stops it working) during the implementation and process evaluations we run alongside RCTs.
However, qualitative personal accounts cannot produce unbiased estimates of the causal effect of a programme or service. Only quantitative data can do that, but crucially, not on its own. Rather, you need a quantitative indicator that is a good proxy for the change you care about (not always a trivial matter), and the data needs to be collected in the context of an RCT or a quasi-experiment.
Funders who miss this last point often ask for quantitative data that is actually unhelpful, most commonly in the form of a ‘pre-post’ outcome survey. This type of evaluation can be both costly for organisations and disruptive for service users, yet it rarely produces useful information beyond showing promise under strong assumptions that are often unlikely to hold.
For example, a recent RCT of a programme that sought to reduce high intensity use of emergency departments in the US found no effect of the intervention compared to the control group. If the evaluation had been conducted using a simple pre-post design it would have incorrectly concluded that the programme reduced readmissions by 38 percentage points.
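To make the intuition concrete, here is a minimal simulation sketch in Python, using entirely made-up numbers rather than the study’s data. People are typically enrolled in this kind of programme because they have just had a period of unusually heavy use, so their usage tends to fall back towards its long-run level even without any intervention. A pre-post comparison attributes that fall to the programme; a randomised control group shows it would have happened anyway.

```python
import random

random.seed(42)

def simulate_person():
    # Long-run chance that this person is readmitted in any given period
    baseline = random.uniform(0.2, 0.8)
    pre = 1  # enrolled precisely because they were readmitted before the programme
    post = 1 if random.random() < baseline else 0  # later outcome reverts to their baseline
    return pre, post

# No true treatment effect: both groups are simulated identically
treatment = [simulate_person() for _ in range(10_000)]
control = [simulate_person() for _ in range(10_000)]

pre_rate = sum(p for p, _ in treatment) / len(treatment)
post_rate = sum(q for _, q in treatment) / len(treatment)
control_post_rate = sum(q for _, q in control) / len(control)

print(f"Pre-post 'effect':      {pre_rate - post_rate:.1%}")          # large but spurious
print(f"RCT estimate of effect: {control_post_rate - post_rate:.1%}")  # close to zero, correct
```

In this sketch the pre-post comparison suggests a large reduction purely because of regression to the mean, while comparing the treated group to a randomised control group correctly finds no effect.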
Where qualitative methods are sometimes used to answer the wrong question, RCTs are sometimes applied too narrowly to a single type of question. RCTs are not just good for proving the impact of a particular programme; they can also be used to test different variants of an activity against each other to find out which works best. This can be done to test new administrative processes or to test ideas for improving a programme itself.
For example, for the last three years we’ve worked with the Raspberry Pi Foundation, running rapid RCTs to help them work out whether phone calls or emails are more effective when setting up new Code Clubs, and whether they can make the application process easier for their volunteers. We found that calls were indeed more effective, and that changing the emails sent to volunteers increased applications by a significant 6%. We think there is a lot more scope to use these robust research methods to test ways of improving delivery.
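As a rough illustration of what such a variant test involves (the numbers below are hypothetical, not the Foundation’s actual results), volunteers can be randomly assigned to the existing email or a simplified one, and the difference in application rates checked with a simple two-proportion test:

```python
from math import sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    # Two-proportion z-test: is the difference in rates larger than chance would explain?
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se, p_b - p_a

# Hypothetical illustration: 2,000 volunteers per arm, randomly assigned
z, diff = two_proportion_z(successes_a=400, n_a=2000,   # current email: 20% apply
                           successes_b=520, n_b=2000)   # simplified email: 26% apply
print(f"Difference in application rate: {diff:.1%}, z = {z:.2f}")
# |z| > 1.96 would suggest the difference is unlikely to be chance alone
```

The same pattern of randomising, measuring a simple outcome and comparing rates works for most operational questions, not just headline impact evaluations.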
Some sectors are further ahead than others when it comes to these two ideas, and R&D in general (education in the UK and international development come to mind). To move things forward in the third sector, funders and policy makers need to provide the appropriate conditions for good R&D to thrive. In particular they could: