In 2007, Ben Bernanke spoke about the ongoing difficulty for regulators in balancing ‘the benefits of financial innovation’ against ‘the risks that may accompany that innovation’.
It is obviously debatable whether, pre-crisis, regulators got that balance right. It is undeniable, however, that complex trade-offs, of the kind described by the former Chair of the Federal Reserve, are a reality of economics and regulatory practice.
The vexed question, of course, has always been how those trade-offs are managed. In other words, how effective have policy makers been in ensuring that the ‘usefulness’ of their activity outweighs its cost?
Regulators and policy makers have long been alive to such dangers as regulatory arbitrage and unintended consequence, as well as the basic importance of impact assessments.
But this has neither eradicated the risk of policy interventions landing in unpredictable ways, nor definitively solved the challenge of diagnosing what most needs to be ‘fixed’ to improve outcomes, nor identified the critical elements in the economic trade-offs involved.
This speaks to a simple truth. Market dynamics, particularly in financial markets, are extraordinarily complex. Poor customer outcomes often have multiple and diffuse causes, both ‘traditional’ and behavioural. Moreover, regulatory resources are by their nature limited. It is an equation that does not lend itself to uniformly perfect solutions.
How you overcome this challenge is therefore a vital question. And the most compelling answer we have to it may be emerging today from multiple sources of academic activity.
Those sources include high profile advances in behavioural economics, as well as leaps forward in data science methodologies – the ‘big data’ revolution.
But they also include a host of improvements in other areas of economic analysis that, whilst less heralded, are profoundly important to policy makers. These include research into effective competition and how it is affected by features of the market, including behavioural biases.
Taken as a whole, these developments offer policy makers a clear opportunity to better understand their markets and, ultimately, to design more effective interventions for them.
There has, however, long been a challenge over how best to knit insights from these diverse developments together. Reconciling them with existing regulatory economic approaches, and combining them into a comprehensive framework for making decisions, has been difficult.
Economics for Effective Regulation (EFER) has been designed as a specific response to these tasks. It sets out a structure for assessing real-world markets that applies those economic advances through the standard stages of regulatory analysis: from problem diagnosis to intervention design and impact assessment.
In this sense, EFER should be able to complement traditional approaches to Impact Assessment (IA) and Cost Benefit Analysis (CBA) in important ways.
IA and CBA, like old friends, have long been relied on by policy makers and obviously have value. However, it is notoriously difficult to use their measurements effectively. Indeed, FCA analysis of 100 CBAs from around the world suggests that the quantification they achieve can be partial.
EFER’s chief potential therefore seems to lie in providing a more in-depth diagnosis of the issues causing poor outcomes, and, in doing so, in offering policy makers the opportunity to pick remedies that are definitively linked to the root causes of a problem, rather than its symptoms alone.
This suggests EFER offers support to policy makers in fulfilling statutory CBA obligations, and that the two are not in competition. In fact, EFER should allow economists to demonstrate qualitatively and with greater authority whether an intervention is, or is not, capable of addressing a problem proportionately. Moreover, it should help identify the most significant impacts on the market, making it possible to target quantification efforts more effectively (to the extent measurement is feasible and proportionate).