
Evaluation

Investing in what works is critical. We help you iteratively improve solutions and evaluate how effective they are.

Find out what works, when and for whom

A key part of BIT’s work is running robust evaluations to measure and understand the impact of our solutions. Our team has extensive expertise in randomised controlled trials (RCTs), the ‘gold standard’ in evaluation methods. We have conducted hundreds of evaluations with government, nonprofit and private sector clients worldwide, from piloting small interventions in schools to running large-scale RCTs with millions of taxpayers.

Whenever feasible, we recommend generating evidence from both quantitative and qualitative research, such as user testing and process evaluations. Combining the two provides a more holistic understanding of an intervention’s effects.

The results we’ve achieved have been published in leading academic journals, including The Lancet, Nature and the Journal of Public Economics.

In addition to evaluating interventions that we’ve designed with our clients, we provide standalone evaluation expertise to assess the work of others.

As independent evaluators, we can help you design the most robust and feasible evaluation to generate the evidence you need – whether it be to gather feedback during the early development of a new product or to determine the impact of a new programme.

Early development of a new programme or product

We help programme creators design, develop and gather feedback on their new initiatives with qualitative research and design tools, such as user journey mapping, co-design and prototyping. This process ensures that solutions in progress are fine-tuned for their intended audiences.

Early implementation: Pilot evaluations and feasibility studies

Once an initial version of a programme or product has been developed, we run pilot evaluations to test its implementation on a small scale. Pilot evaluations allow us to assess a programme’s feasibility and acceptability, and to gather early evidence of promise.

We also conduct feasibility studies, which can be a crucial step before measuring impact. These assess whether running an impact evaluation is viable, and help us recommend practical tweaks to intervention delivery to make it more suitable for impact evaluation.

Testing for impact in the real world

Ultimately, we want to know if the solutions we design actually make the intended change. 

We have a particularly strong track record designing and running RCTs. We have conducted more than 800 of them, from large national studies to hyper-local ones. These trials give our clients the best sense of an intervention’s impact. 
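
As a rough illustration of the kind of analysis an RCT supports, the sketch below estimates a treatment effect as the difference in mean outcomes between two trial arms. It uses simulated data and generic Python tooling; the numbers and variable names are hypothetical, not drawn from BIT’s own datasets or analysis code.

```python
# Illustrative two-arm RCT analysis on simulated data (hypothetical values,
# for illustration only). The treatment effect is estimated as the
# difference in mean outcomes between the treatment and control arms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated outcomes for each arm, with a small true effect of 0.02.
control = rng.normal(loc=0.30, scale=0.10, size=5000)
treatment = rng.normal(loc=0.32, scale=0.10, size=5000)

effect = treatment.mean() - control.mean()             # difference-in-means estimate
t_stat, p_value = stats.ttest_ind(treatment, control)  # two-sample t-test

print(f"Estimated effect: {effect:.3f} (p = {p_value:.4f})")
```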

We are experts in a range of other quasi-experimental methods as well, such as difference-in-differences designs, synthetic controls, regression discontinuity designs and propensity score matching. We also have a growing practice in theory-based evaluation.
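
To give a flavour of one of these methods, a difference-in-differences design compares how outcomes change over time between an exposed group and a comparison group. The sketch below shows the idea on simulated data; the column names and effect sizes are hypothetical, and the regression uses standard statsmodels tooling rather than any BIT-specific code.

```python
# Minimal difference-in-differences sketch on simulated data (hypothetical,
# for illustration only). The interaction term `treated:post` is the DiD estimate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000

df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # 1 = exposed group, 0 = comparison group
    "post": rng.integers(0, 2, n),     # 1 = after the intervention, 0 = before
})
# Outcome with group and time effects plus a true DiD effect of 0.5.
df["y"] = (
    1.0
    + 0.3 * df["treated"]
    + 0.2 * df["post"]
    + 0.5 * df["treated"] * df["post"]
    + rng.normal(0, 1, n)
)

model = smf.ols("y ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # difference-in-differences estimate
```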

To maximise learning, we complement impact evaluations with mixed-methods implementation and process evaluations. These generate evidence on why something worked or not. They allow us to examine the quality of a programme’s implementation, understand how and why different user groups responded differently and pinpoint which elements of a programme drove its impact.

Testing for impact in a simulated environment

In business and government, speed is often of the essence. But conducting an evaluation before scaling is highly valuable – it can save time and resources in the long run. That said, we also realise that not all policies or ideas can be tested in advance in the real world.

We have developed a range of tools to help our partners rapidly evaluate interventions in a safe, controlled environment before they are launched more widely. For example, we use BIT’s in-house testing lab, Predictiv, to conduct sophisticated online trials that replicate the real world in a simulated setting. This gives our clients rapid, low-cost evidence to refine intervention elements before launching them to millions of users.

Towards impact at scale: formative scaling evaluations

A lot can change when you start to scale. Including an evaluation alongside national delivery can provide valuable insight into what’s shifting and why.

Our team can also assist with formative scaling evaluations. These help our clients understand and compare how delivery is working across different contexts, determine if programme quality is being maintained and identify challenges to scaling effectively. We turn these insights into recommended iterations on the larger scale model to help ensure its widespread success.

Talk to us

Get in touch to discuss how our expertise in evidence-led problem solving can help you.

