13th Mar 2024

Growth Vouchers: Introducing Randomised Control Trials to Government Policy

The Centre for Economic Performance at the LSE recently published the results of a randomised control trial to evaluate the impact of business Growth Vouchers. BIT was involved in the design of this trial from the very beginning and here we reflect on the importance of RCTs, why this one worked, and offer some challenges for the future. 

A decade ago, we were gathered in the Pillared Room of 10 Downing Street for the launch of the UK government’s Growth Vouchers scheme. There were speeches from the then Minister of State for Business and Enterprise, Matt Hancock, and the Prime Minister’s adviser Lord Young. In fact, as we reveal at the end, the launch wasn’t quite what it seemed.

But back to why we were there. The argument for Growth Vouchers was simple. Many UK firms sat in a long tail of low productivity, yet rarely sought advice on how they might improve their performance. The scheme therefore offered vouchers of up to £2,000 for firms to get that advice, in the hope that this would boost their growth. The programme also established a marketplace where businesses could exchange reviews of advice providers, with the aim of creating incentives to provide high-quality advice.

The Minister was open and honest that we didn’t know whether the scheme would work, so it was to be delivered as a large-scale randomised control trial (RCT). Ten years on, we finally have the answer (spoiler alert: it worked, at least to a point). It is also a story about how hard it is to get RCTs done in government – and why it is the right thing to do.

The Growth Vouchers scheme 

The Growth Vouchers scheme was one of the very first RCTs that the UK government ran – and they are still very rare across government policy. BIT worked closely with the then Department for Business, Innovation and Skills (which later became BEIS and is now DBT) to develop the trial design. Running high-quality evaluations can be hard, and RCTs (widely seen as the ‘gold standard’ for establishing causality between an intervention and an outcome) are often put in the ‘too hard’ bucket. They generally require large sample sizes. They require randomising end-users into those that get the support and those that don’t. They require high-quality data collection. Most of these things need to be designed in from the start of the programme.
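
To make the ‘designed in from the start’ point concrete, here is a minimal sketch of what randomising applicants into treatment and control arms can look like. The function name, business IDs and 50/50 split are hypothetical illustrations, not the actual Growth Vouchers assignment mechanism.

```python
# A minimal sketch of random assignment for an RCT, assuming applicants arrive
# as a simple list of business IDs. Illustrative only, not the scheme's code.
import random

def assign_to_arms(business_ids, treatment_share=0.5, seed=2014):
    """Randomly split applicant businesses into treatment and control groups."""
    rng = random.Random(seed)           # fixed seed so the assignment is reproducible and auditable
    shuffled = business_ids[:]
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * treatment_share)
    treatment = set(shuffled[:cutoff])  # offered a voucher
    control = set(shuffled[cutoff:])    # not offered a voucher
    return treatment, control

if __name__ == "__main__":
    applicants = [f"biz_{i:05d}" for i in range(10_000)]   # hypothetical applicant IDs
    treated, untreated = assign_to_arms(applicants)
    print(len(treated), len(untreated))
```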

Last month, the Centre for Economic Performance (CEP) at LSE published results from the overall trial – and credit is due to Prof Henry Overman for finally getting the analysis published.

They found the scheme resulted in an 8% increase in business turnover in the first year, corresponding to an increase of £73,120 per business. In comparison, the average amount claimed per business via the vouchers was £1,714. A substantial effect size, and an impressive return.
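
Taking the published figures at face value, some rough back-of-the-envelope arithmetic (our own inference from the two numbers quoted, not a figure from the CEP paper) shows why that return looks so striking:

```latex
% Rough arithmetic implied by the quoted figures (illustrative inference only)
\frac{\text{first-year turnover uplift}}{\text{average voucher claim}}
  \approx \frac{\text{£}73{,}120}{\text{£}1{,}714} \approx 43
\qquad
\text{implied average turnover} \approx \frac{\text{£}73{,}120}{0.08} \approx \text{£}914{,}000
```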

That said, this increase in turnover disappeared by the second year, although those businesses that took up sales and marketing advice appeared to see longer-lasting effects. This gives us some clues on how to improve business advice: potentially, focus more on sales and marketing advice and on the structural elements of business practices and decision-making that are holding back sustained growth (rather than only solving businesses’ immediate problems). 

For the geeky, the CEP analysis also included separate estimates using non-RCT techniques (specifically, propensity score matching or PSM, which is often used when it isn’t possible to run an RCT). They found that this method results in a substantial upward bias in the estimate of the long-run effect of the scheme. Specifically, the PSM estimates suggest a persistent positive impact on turnover and employment in the second and third year after taking up the vouchers, whereas the RCT shows no long-term effect. Why do we point this out? With RCTs still so rare in this space, this analysis can help us figure out how far off the mark more traditional evaluations may be. 
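
For readers who want to see the mechanics, the sketch below simulates, on entirely synthetic data, why matching on observables can overstate an effect when take-up is self-selected, while random assignment recovers it. The variable names, model and numbers are invented for illustration and are not the CEP’s analysis or data.

```python
# Synthetic illustration: PSM on observables vs an RCT when an unobserved trait
# ("ambition") drives both voucher take-up and growth. Not the CEP analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 20_000
observed = rng.normal(size=(n, 2))   # observable firm characteristics (synthetic)
ambition = rng.normal(size=n)        # unobserved trait driving both take-up and growth
true_effect = 0.5

# Observational world: firms self-select into advice partly on the unobserved trait.
take_up = (observed[:, 0] + ambition + rng.normal(size=n) > 0).astype(int)
growth = observed @ np.array([0.4, 0.2]) + ambition + true_effect * take_up + rng.normal(size=n)

# PSM estimate: match takers to non-takers on a propensity score fitted to observables only.
propensity = LogisticRegression().fit(observed, take_up).predict_proba(observed)[:, 1]
controls = np.where(take_up == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(propensity[controls].reshape(-1, 1))
_, idx = nn.kneighbors(propensity[take_up == 1].reshape(-1, 1))
psm_effect = (growth[take_up == 1] - growth[controls[idx.ravel()]]).mean()

# RCT estimate: assignment is random, so a simple difference in means recovers the truth.
assigned = rng.integers(0, 2, size=n)
growth_rct = observed @ np.array([0.4, 0.2]) + ambition + true_effect * assigned + rng.normal(size=n)
rct_effect = growth_rct[assigned == 1].mean() - growth_rct[assigned == 0].mean()

print(f"true effect {true_effect}, PSM estimate {psm_effect:.2f}, RCT estimate {rct_effect:.2f}")
```

In this toy setup the PSM estimate comes out above the true effect because the matched non-takers differ on the unobserved trait, whereas the randomised comparison does not suffer from that selection.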

The productivity problem – and the evidence problem

Analysis like that of the CEP is incredibly rare. The What Works Centre for Local Economic Growth (WWLEG) has undertaken reviews of studies on the impact of business advice schemes on growth. They searched through 700 evaluations and reviews from the UK and OECD countries and found only 23 that met minimum standards of rigour; of those, only six showed a positive effect on metrics related to productivity or employment.

We need to get much better at filling those evidence gaps if we want to tackle the UK’s long-standing productivity problem. We need to go beyond identifying the problems and get into understanding how to design successful policies and programmes that help businesses grow. That won’t always mean an RCT: sometimes that may well be impractical. But we do need much more of the ‘test and learn’ approach that we saw in the Growth Vouchers programme. 

So what made it possible to run an RCT for Growth Vouchers, and what lessons does this hold for the future?

  1. The Growth Vouchers programme was planned and implemented with evaluation built in from the start. That meant it was possible, for example, to randomise businesses into control and treatment groups. Unlike other (less robust) evaluation methods, RCTs are hard to implement as an afterthought. Experimentation was also embedded within the programme design itself. For example, BIT worked with HMRC to trial a targeted letter campaign, testing several different letter versions. The best-performing letter increased the number of businesses clicking through to apply by 50% compared with a control letter, and overall the trial added 9,000 extra sign-ups, more than any other marketing source – a massive effect size in its own right, and one that many marketers would dream of achieving. (A minimal sketch of how this kind of comparison can be analysed follows this list.)
  2. The CEP analysis relied on both survey and administrative data (constructed from PAYE and VAT records). Having both survey and administrative data helps enormously. Survey data can provide useful contextual information but is often incomplete. Administrative data means researchers can estimate actual business turnover. 
  3. We need accountability in the system to encourage high-quality evaluations. The CEP paper was published ten years after Growth Vouchers were rolled out. RCTs do take time, but they don’t need to take that long. During those ten years, many of the individuals who first worked on Growth Vouchers moved on to other roles. The acknowledgements in the paper show just how many individuals and organisations (inside and outside government) came together to make sure the evaluation was followed through – in many ways a remarkable effort. But it’s all too easy to see how it might not have happened. A key lever for improving how we spend public money is to strengthen the mechanisms that ensure robust evaluations are built into policy. In this respect, the UK’s Evaluation Task Force (ETF) and the Australian Centre for Evaluation (ACE) are institutional developments to celebrate.
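
As promised above, here is a minimal sketch of how a letter trial like the one in point 1 can be analysed: compare click-through rates between the control letter and a variant using a two-proportion z-test. The counts below are invented for illustration and are not HMRC’s actual numbers.

```python
# Minimal two-proportion z-test for a letter A/B trial. Counts are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(clicks_a, sent_a, clicks_b, sent_b):
    """Return the z statistic and two-sided p-value for a difference in click-through rates."""
    p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)       # pooled rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: the variant lifts click-through from 2.0% to 3.0% (a 50% relative lift).
z, p = two_proportion_ztest(clicks_a=400, sent_a=20_000, clicks_b=600, sent_b=20_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```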

Conclusion: a little dirty secret and a much bigger one

So why wasn’t the launch quite what it seemed? The Minister, Matt Hancock, was very pleased with how it went. However, no one had told him that all the press, who were supposed to be at the launch, had been uninvited by the No 10 press team. It was therefore a room full of officials and supporters. (He was told later.)

Why? Because when the press office realised that the programme was built around an RCT, they panicked. “What do you mean, we’re going to reject businesses who apply for the scheme at random? Are you kidding me? How are we going to explain that?”

The real dirty secret, of course, is not that No 10 uninvited the press to this launch. It’s that huge swathes of policy happen without robust evidence or evaluation, including much of growth policy. This isn’t unique to the UK. In fact, the UK is now one of the countries at the forefront of efforts to increase the volume of robust evaluation, not least due to the establishment of the Evaluation Task Force and the What Works Centres. But we still have a long way to go.

If such a scheme and RCT were launched today, it is to be hoped that No 10 would not uninvite the press. The Growth Vouchers trial showed that it was possible to run an RCT without the world ending. Through the trial, we have learned a lot about how to run a successful trial in the business policy space, including how to set up the right data linkages. That has taken us a valuable step forward, not just in understanding how to help businesses grow, but also in how to help evidence-based policy grow – and that’s just as important for the long-term good of the country.
