Prisoners, wellness programmes and the rats of Hanoi: why the FCA tests its interventions

Speech by Christopher Woolard, Executive Director of Strategy and Competition at the UK Competition Network conference.


Speaker: Christopher Woolard, Executive Director of Strategy and Competition
Location: UKCN conference, London
Delivered on: 1st October 2018

Highlights:

  • The FCA has been at the forefront of recognising the value of testing interventions and was one of the first global regulators to set up a dedicated behavioural economics team.
  • Testing helps us design better, more effective regulation by showing what works, and what doesn’t.
  • The FCA has worked closely with practitioners and fellow regulators to share knowledge and design best practice.

Note: this is the speech as drafted and may differ from the delivered version.


I’m going to give you a series of real problems. Consider the solutions and decide if you think they’re going to work. Do employee wellness programmes, which typically offer health screenings and access to weight-loss programmes or gyms, make people healthier? Does sending children who are at risk of offending to hear from prisoners put them back on the straight and narrow? And did paying villagers a bounty for catching rats and producing their tails as proof reduce Hanoi’s rat problem? The answer? None of these programmes worked.

Early evidence appeared to show that employee wellness programmes improve health. But because those studies compared people who chose to participate against those who chose not to, the differences they observed were simply self-selection: healthier, more motivated employees were the ones signing up. In fact, wellness programmes don’t improve health.

Sending children to visit prisoners and learn about life on the inside actually increased offending. This was demonstrated in a famous policy failure called Scared Straight. 

And the rat population of Hanoi? Enterprising citizens found that they could cut the tails off rats in exchange for money – and then release them back into the sewers. There they would produce offspring whose tails could also be chopped off for profit!

The consequences of our policies and actions are not always easy to predict. These stories show us that even well-intentioned, well-designed interventions can fail. And we may not know this in advance.

So the first thing that we must recognise is that regulators and policymakers are fallible. Lots of us are probably aware of behavioural biases, like the tendency to favour the present over the future. And we know that this means that consumers may make poorer choices. But we, too, are human. Whether we are trying to increase switching in the savings market or encourage consumers to compare mobile phone deals, policymakers must recognise the limitations of our ability to predict the future – in particular, to predict how firms and consumers might respond to our actions.

Fortunately, there is a solution. From energy regulators to water companies, policymakers have learnt the benefits of testing interventions before implementing them. In addition to more formal consultation, testing reduces uncertainty about what will work and gives us insight into how markets might respond.

The FCA has been at the forefront of recognising this. We were one of the first global regulators to set up a dedicated behavioural economics team, which investigates how consumers might respond to our interventions and rigorously tests different theories. We have tested demand-side remedies in the savings market, in general insurance, in current accounts and in payday lending, to name just a few.

We have done this in partnership with firms, the Competition and Markets Authority (CMA) and others. And our testing has shown us when interventions work, and more importantly, when they don’t. 

For example, extensive testing of letters, texts and emails in the cash savings market showed us that the impact of disclosure remedies on switching is still small and that we need to explore stronger measures. We are now considering alternative remedies such as the introduction of a basic savings rate to help longstanding customers. This would be a variable interest rate set by each firm which would apply to all easy access cash savings accounts and easy access cash ISAs after they have been open for a set period of time. This is just one example of how testing can help us avoid the costs and wasted opportunity of interventions that sound good in theory, but have only limited effects in practice.

And testing doesn’t have to be hard. There are methods to suit every scenario. We could do a survey, which gives us a quick and dirty state of play on how consumers are feeling. We could carry out user design research to explore the barriers preventing good outcomes. We could set up an online experiment to test how consumers might respond to different policy iterations. Or, we could run randomised controlled trials in the field which give us reliable estimates of the real-world impact and magnitude of our remedies.
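To make the last of these concrete, here is a minimal sketch of how a two-arm randomised controlled trial might be analysed. Everything in it is hypothetical: the sample size, the 3% baseline switching rate and the uplift are invented for illustration, and a real trial would also involve power calculations and pre-specified outcomes.

```python
import math
import random

random.seed(42)

# Hypothetical trial: 10,000 customers randomised between a control group
# (no contact) and a treatment group that receives a reminder letter.
N = 10_000
arms = [random.choice(["control", "treatment"]) for _ in range(N)]

# Simulated outcomes: a 3% baseline switching rate and a 1.5 percentage
# point uplift from the letter. Both numbers are invented.
BASE, UPLIFT = 0.03, 0.015
switched = [random.random() < (BASE + UPLIFT if arm == "treatment" else BASE)
            for arm in arms]

def switch_rate(arm_name):
    outcomes = [s for a, s in zip(arms, switched) if a == arm_name]
    return sum(outcomes) / len(outcomes), len(outcomes)

p_c, n_c = switch_rate("control")
p_t, n_t = switch_rate("treatment")

# Two-proportion z-test: is the observed uplift distinguishable from noise?
p_pool = (p_c * n_c + p_t * n_t) / (n_c + n_t)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
z = (p_t - p_c) / se

print(f"control:   {p_c:.2%} of {n_c}")
print(f"treatment: {p_t:.2%} of {n_t}")
print(f"uplift: {p_t - p_c:.2%} (z = {z:.2f})")
```

The point is not the statistics but the randomisation: because customers are assigned to arms by chance, any difference in switching rates can be attributed to the letter itself rather than to self-selection, which is exactly what undid the wellness programme studies.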

Whatever the method, there are three rules of thumb to follow when testing competition remedies:

  1. Firstly, use testing which combines the diagnosis of problems with the design of remedies. This can mean using more than one method. To give one example, user design research could give us the small but crucial details which determine whether a wellness programme recruits – and retains – people who would benefit. Combined with a field trial, which pits one or more interventions against the status quo, we are likely to have a better chance of finding the holy grail.

At the FCA, we did exactly this to tackle the problem of consumers unintentionally going overdrawn. Working with the CMA, we tested consumer prompts and alerts, such as text messages. We started by running user design research sessions to explore hypotheses for what makes consumers pay attention to messages. We used the results of this to decide to focus on alerts rather than prompts. We then tested text messages on real consumers in lab experiments and field trials. Our interdisciplinary approach allowed us to discover precisely which types of messages worked, and which didn’t.

This was crucial for designing the final policy. Without testing, in this case, we might have enacted policies which didn’t work. This would have created costs for industry as well as consumers.
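As a purely illustrative sketch of that last step (the message texts and response rates below are invented, not data from the FCA/CMA trial), picking between alert variants reduces to comparing each arm against the status quo:

```python
# Invented response rates from a hypothetical multi-arm trial of alert texts.
# "Response" here means the customer acted in time to avoid an overdraft.
arms = {
    "no message (status quo)": 0.040,
    "generic overdraft warning": 0.041,
    "alert: 'you are about to go overdrawn'": 0.062,
    "alert with a tip on how to avoid the charge": 0.071,
}

baseline = arms["no message (status quo)"]
for name, rate in arms.items():
    uplift = rate - baseline
    # Illustrative cut-off only; a real trial would use a significance test.
    verdict = "works" if uplift > 0.01 else "no real effect"
    print(f"{name:<45} {rate:.1%}  uplift {uplift:+.1%}  -> {verdict}")
```

Run over real trial data, a comparison like this is what lets us say precisely which messages moved consumers and which were indistinguishable from doing nothing.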

  2. The second piece of advice is to choose your outcomes carefully. Unless we measure the precise outcomes we are aiming for, it is easy to see success where – in fact – there is none. We learnt this recently as part of our experimental research into credit cards. We know that many people are paying more in credit card debt service costs, and taking longer to pay off debt, than they need to. One in four people pay only the minimum contractual amount per month. We wanted to help consumers increase their payments and pay off their debt more quickly and more cheaply.

To do this, we worked with a firm to remove the minimum payment option when customers sign up for a direct debit. The early results showed great promise. One in five consumers moved away from a minimum payment direct debit and instead chose a fixed payment direct debit. Six months into the trial, 7 percentage points more people were paying more than the minimum. We could have stopped here and congratulated ourselves on a job well done.

But our intention was to reduce overall debt. So, did this happen?

The answer is no. Although many people made higher automatic payments, this was offset by the lower manual payments that they would otherwise have made. And while our intervention caused more people to set up a higher direct debit, it also caused some people to drop out of setting up a direct debit at all. This combination of factors meant that overall debt stayed the same.

And this is something that we would never have discovered without choosing the right measures and monitoring them over a decent amount of time.
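The trap is easy to caricature in code. In the sketch below every number is invented rather than taken from the trial; the point is simply that the proxy outcome and the primary outcome can move in different directions.

```python
# Invented average monthly payments per customer (not the trial's data).
# The treatment removes the minimum-payment option from direct debit sign-up.
control = {"direct_debit": 25.0, "manual_payment": 40.0, "share_above_min": 0.30}
treated = {"direct_debit": 45.0, "manual_payment": 20.0, "share_above_min": 0.37}

# Proxy outcome: share of customers paying more than the contractual minimum.
# On this measure the intervention looks like a clear success.
print(f"paying above minimum: {control['share_above_min']:.0%} -> "
      f"{treated['share_above_min']:.0%}")

# Primary outcome: the total actually repaid each month. Higher automatic
# payments are offset by lower manual payments, so debt falls no faster.
for label, group in [("control", control), ("treated", treated)]:
    total = group["direct_debit"] + group["manual_payment"]
    print(f"{label}: total repayment £{total:.2f}/month")
```

Had we tracked only the proxy, we would have declared victory; tracking the outcome we actually cared about – total debt – showed there was nothing to celebrate.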

  3. My final rule of thumb is to pay attention to the distribution of outcomes. Policies can create winners and losers. This means that if we care about a particular subset of the population – let’s say vulnerable people – we might want to check what happened to them, as well as what happened to the whole population. Sometimes we might even want to prioritise one group over another.

For example, this summer Ofgem published the results of its active choice, collective switching trial. This helped previously disengaged consumers to switch by giving them a straightforward method where they didn’t need to enter their existing tariff details. The trial found that over 20% of contacted customers switched energy provider during the 3-month trial, even though on average they had not switched at all in the previous 6.5 years. And it particularly helped vulnerable consumers. Twenty-four percent of switches were by participants over 75 years old and customers on the Priority Services Register were almost as likely to switch as others.
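In practice, checking the distribution of outcomes just means cutting the headline estimate by subgroup. A minimal sketch, again with invented records rather than Ofgem’s data:

```python
from collections import defaultdict

# Invented (subgroup, switched?) records standing in for trial data.
records = [
    ("over_75", True), ("over_75", False), ("over_75", True),
    ("priority_register", True), ("priority_register", False),
    ("other", True), ("other", False), ("other", False), ("other", True),
]

# Tally switches per subgroup so the headline rate can be checked against
# the groups we care most about, such as vulnerable customers.
tally = defaultdict(lambda: [0, 0])   # subgroup -> [switched, total]
for group, switched in records:
    tally[group][0] += switched
    tally[group][1] += 1

overall = sum(s for s, _ in tally.values()) / sum(n for _, n in tally.values())
print(f"overall: {overall:.0%}")
for group, (s, n) in sorted(tally.items()):
    print(f"  {group:>17}: {s / n:.0%} ({s}/{n})")
```

A policy whose headline effect is positive can still leave a priority group behind; only the breakdown reveals it.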

Finally, our remedies will be better if we collaborate. Regulators and practitioners are tackling many of the same problems. Often we’re aiming our remedies at the same people! Some of our best remedies have come from working with firms. And collaboration with bodies like the UK Competition Network has been invaluable in sharing knowledge and designing best practice, as the lessons learned paper published today shows.

So, whether it’s prisoners, wellness programmes or the rats of Hanoi, testing shows us that even the best-intentioned interventions can fail. Policymakers are fallible, and just because something sounds good in theory, there’s no guarantee it will work in practice. That’s why we must keep testing the consequences of our interventions: to learn when they work, and more importantly, when they don’t.