Ex post Impact Evaluation Framework – Feedback Statement

This feedback statement summarises views received from our stakeholders after the publication of a discussion paper in April 2018, and our responses to them.

We published a Discussion Paper (DP18/3) on our Ex post Impact Evaluation (EPIE) Framework on 9 April 2018. Our consultation ran until 9 July 2018.

We asked 5 main questions and invited views on any other key issues.

This feedback statement (FS) provides an overview of the views received in response to our questions, our responses and planned next steps.

We received responses from 4 trade representative bodies, a wealth management group and our Practitioners Panel. During the consultation period, we also sought views on our framework through discussions with a few public bodies.

Respondents generally welcomed, and were supportive of, our proposals for ex post impact evaluation of our interventions. They noted that evaluation is central to any effective decision-making framework, and recognised the importance of assessing the impact of regulatory interventions to improve outcomes for consumers, now and in the future.

We have published the final framework document alongside this FS.

The sections below summarise the views raised by our stakeholders on a few broad themes and on the 5 questions we posed in the discussion paper, together with our responses.

Key themes

Other existing guidance

One respondent asked that we take note of and follow the guidance in HM Treasury’s Green Book and Magenta Book.


Central government guidance informed the framework. See, for example, paragraph 4.8 and annex 2 of our final framework.


Follow-up actions

Some respondents told us that it is unclear how the ex post impact evaluation findings will be used.

One respondent asked whether a regulation would be changed if it has not had the intended impact. Another respondent went further, stating that the regulator must be willing to cancel or radically change an intervention if there are serious doubts about it.

One respondent asked us to commit to taking follow-up actions after an ex post impact evaluation, rather than being protective of our reputation.

We state, in the framework, that if we find that the issues identified in a market are still occurring and our interventions have not had the intended effect, ‘we will consider our next steps and whether to take further action’.

In the recent GAP impact evaluation, published in July 2018, we were open about some of the impacts being less pronounced than we had anticipated (the impact on standalone sales as a share of total sales, and on add-on prices).

We have amended the EPIE framework in Section 3 to reflect these comments (see paragraph 3.13).


Unintended consequences

A couple of respondents asked how we approach evaluating unintended consequences of our interventions. One of them told us that evaluation should seek to determine whether:

a) the intervention has achieved its goal

b) it has imposed unnecessary burdens on consumers or firms

c) it has resulted in secondary effects that are welfare-negative

They asked that we seek out and acknowledge any unintended consequences that cause harm to firms and especially to consumers, and address whether the intended effects of the intervention justify any side-effects. As far as possible, we should outline in advance how we will weigh the various strands when determining the net effect of an intervention.

As mentioned in Box 2 of the framework, we intend to use causal chains at the start of an intervention to outline its expected effects. See, for example, several recently published CBAs, such as those for SME Access to FOS and our High Cost Credit Review CPs.

In the framework, we have been open about the difficulties in establishing causality of intended consequences. It is arguably more difficult to do so for unintended consequences, as we cannot test the ‘unknown unknowns’. However, we believe that any large negative unintended impacts are likely to be identified in discussions with firms (such as those we had for our reducing barriers to entry evaluation) and/or in the consumer surveys that may be undertaken in our impact evaluations (such as the GAP evaluation).

This is one of the benefits of complementing quantitative analysis with qualitative methods, and, as we have indicated in the EPIE framework, we intend to adopt this mix in our impact evaluations.

Where there are some ‘known unknowns’, eg the possibility of waterbed effects, we intend for the evaluations to consider whether these have materialised (eg in the GAP evaluation we looked at whether there had been a waterbed effect on Complete Wheel Protection and could not find one).

Testing for unintended consequences widens the scope of an evaluation, and is likely to increase the burden on firms from data collection. Seeking out any unintended consequences needs to be balanced against the cost this imposes on firms. 

We have acknowledged in the framework that we will examine significant unintended consequences highlighted by quantitative or qualitative evidence (see paragraph 4.9 of the framework document).


Publication commitment

Respondents welcomed the statement that we ‘expect to publish our evaluations in most cases’ as it will improve transparency.

We have published 3 ex post impact evaluations this year (GAP, bringing additional benchmarks into the regulatory and supervisory regime, and reducing barriers to entry in banking).

Difference between Post Implementation Review (PIR) and EPIE

One respondent told us that the discussion paper is not clear about the difference between PIRs, ex post impact evaluations and potential other methods of assessment. They asked for a more detailed list of the methods we use to assess interventions and how the various approaches differ.

We were also urged to be cautious and not overclaim causality in ex post impact evaluation studies.


As explained in the framework, ex post impact evaluations are a subset of PIRs (see Annex A of the framework document). Throughout the paper we explain what we mean by ex post impact evaluation and its emphasis on quantification.

There are several evaluation methods that can be used (see Annex B of the framework document for a non-exhaustive list). The Magenta Book (to which we cross-refer in the framework) provides much more detail on what existing methods are available. We are open to all robust methods and aim to use the most appropriate ones given the circumstances of the intervention.

We agree that we should be careful not to overclaim causality. For example, in our evaluation of bringing benchmarks into the regulatory regime we were clear that we could not always fully disentangle the effect of our regulation from market developments.  


Timing of an evaluation

A respondent suggested evaluating after a fixed period of time from the implementation of an intervention.

We cannot commit to undertaking an impact evaluation at a single fixed period after the intervention. Interventions usually have very different lag effects depending on their nature. For example, a policy such as the Senior Managers and Certification Regime (SM&CR) will take time to affect conduct and market outcomes, whereas a policy such as our 2017 intervention to increase transparency and engagement at renewal in general insurance markets should have an almost immediate effect.

In general, we may expect an evaluation to take place between 2 and 5 years after an intervention but this, as the examples show, may vary greatly.

Our approach of not committing to specific time periods is consistent with the approach of other regulators, such as the Competition and Markets Authority. However, we might, at the time of implementing specific policies, indicate when, if at all, we would expect to conduct an ex post impact evaluation.



Scope

One respondent observed that while they recognised the FCA’s commitment to focus on identifying market-wide (rather than firm-specific) effects on outcomes, they would also encourage the evaluation of our market-wide approach to supervision and our cross-firm supervisory work. These should be kept separate from any assessment of supervisory activity conducted at firm level.

On our intention to exclude interventions that ‘lack perceived learning potential’ and ‘potentially lack meaningful results’, one respondent believed that evaluating the impact of these initiatives could be beneficial, as it could maximise what we learn. They felt that it is hard to establish from the outset what would have ‘learning potential’, or what criteria would inform such an assessment. So, it would be best to run the evaluation and then decide whether it could feed into other lessons learnt.

One respondent suggested we also look at the totality of our interventions and their cumulative impact.

As we explain in the framework (see for example paragraph 2.4), ex post impact evaluation is only one element of our overall evaluation effort, which includes evaluation of ongoing supervisory activities.

Standard impact evaluation methods work for discrete, one-off interventions such as rules. They are not suitable for measuring the impact of everyday activities. Even so, we are looking at ways we could measure their impact. We may also be able to learn about our approach to supervision from one-off interventions. As mentioned in the framework, we will look to evaluate recommendations from Supervision’s Thematic Reviews where they may lend themselves to ex post impact evaluation.

We believe that our focus should be on EPIEs where they are likely to be able to measure impact and have learning potential. We need to be mindful of the best use of FCA resources, and by implication, firms’ time and resources. We agree that we should not be too conservative, however, when identifying interventions we might evaluate in the future.

On assessing the cumulative cost of regulation, we explain in Annex A of the framework that as well as ex post impact evaluation we undertake different types of research and evaluation studies. For example, we are working on a study that assesses the cumulative impact of regulations on small firms, to improve our understanding of how and why these costs arise, and what the major areas of concern are.


External involvement

A couple of respondents told us that this work would benefit from external oversight and that they would like to be involved with the development of this work, particularly at the selection stage.

We welcome external oversight. This is why we expect to publish our evaluations in most cases (see paragraph 2.11 of the framework document), and why we engage with key external stakeholders, including external academics who provide advice at the early stages and then review our impact evaluation work.

We are open to NAO examinations of our performance measurement approach, including our approach to ex post impact evaluation.

We also welcome suggestions over what interventions to evaluate in the future through the regular communication channels we have with industry (eg responses to consultation papers, roundtables and suggestions via our panels). We will consider these suggestions in line with the selection criteria we have set out in the framework.


What constitutes success of an intervention

A couple of respondents pointed out that care is needed in establishing what we consider ‘a good outcome against which any measurement is made’. We should ensure that we do not move the ‘goal posts’ on what good looks like with the benefit of hindsight.

We received much support for our intention to build evaluation into policy development. Doing this would ensure that more interventions can be evaluated.

We were also told that ongoing monitoring, as well as EPIEs, is key. If monitoring of key measures reflects the theory and our initial expectations, that may often be sufficient evidence that our interventions achieve what was intended.

The use of causal chains upfront in the CBA (as we have done, for example, in SME Access to FOS and the High Cost Credit Review) reduces the risk of moving the goalposts in impact evaluations. Moreover, the CBA already sets out the expected quantitative impact of an intervention, so the goalposts are well established once the CBA is published.


We agree with the points made about the importance of monitoring and have amended the framework to better reflect this (see paragraph 5.6). A good example of how we are trying to improve monitoring is the proposed next steps following the Strategic Review of Retail Banking Business Models.



Q1) Do you agree with the initial focus on assessing the impact on markets of rules, market study remedies and market-wide recommendations from thematic reviews?

Key points/range of views

Respondents to this question agreed with our initial focus.

Our response/next steps

See response to ‘Scope’ above.

Q2) Do you agree with our commitment to undertake at least 1 ex post impact evaluation per year?

Key points/range of views

A few respondents were surprised we were not more ambitious and considered our commitment insufficient.

They were concerned that because so few are planned, we may not be able to draw meaningful conclusions that are sufficiently substantiated to justify changes in the regulatory approach in the future.

Our response/next steps

As we explain in the framework, ex post impact evaluations are only a subset of our wider evaluation activity.

As mentioned in the framework, we hope to increase the number of our evaluations each year. We have already published 3 impact evaluations in 2018.

We believe that a higher regular commitment is not yet possible. There are few past interventions that meet the selection criteria we set out in the framework. We hope that the number of interventions we can evaluate the impact of will increase, as we embed the baseline needed for evaluation in policy development and implementation.  

We have undertaken 3 impact evaluation pilots so far, and useful lessons have already emerged that will feed into future regulatory interventions. For example, on the extent of the effect of a point-of-sale ban, see the GAP evaluation.

As we explain in the framework (paragraph 2.12), over time we expect ex post impact evaluations will contribute to a body of ‘what works’ evidence across regulation and public policy.

Q3) Do you agree with the criteria we intend to use to select which interventions to evaluate ex post?

Key points/range of views

Respondents agreed with the criteria we use to select those interventions suitable for ex post impact evaluation.

One respondent agreed that the selection criteria should consider the regulatory cost of intervention.

Another respondent asked whether it was possible that such evaluations could lead to enforcement investigations. They expressed the view that firms should be made aware of the possibility, and of any link to this activity.

Another respondent asked if the industry could suggest topics for evaluations and whether the issues to be evaluated will be consulted upon. This is because there are recent examples of regulation which their members do not believe will deliver the intended value or benefit to consumers. They would welcome the opportunity to suggest these as potential areas for evaluation.

Our response/next steps

The focus of ex post impact evaluations is on market-wide outcomes rather than on specific firms. However, if an impact evaluation highlights that outcomes are affected by a lack of compliance in the sector, we will need to consider an appropriate response.

We have no plans to consult on potential evaluations. We believe that the selection criteria should make it clear which new interventions are likely to be good candidates for impact evaluation. We may not always be able to carry out an intended impact evaluation of interventions. Intervening events (for example actions by other agencies) can change market conditions. This can mean it is no longer possible to isolate the impact of a specific intervention. We have amended the framework to explain this.

As outlined under ‘External Involvement’ above, industry and consumer groups are welcome to suggest topics for evaluations that the FCA could undertake through the regular channels of communication (such as responses to future consultations, roundtables, or the Practitioner and Consumer Panels).

Q4) Do you agree with how we intend to ensure the independence of our ex post impact evaluations?

Key points/range of views

One respondent expressed broad agreement with how we ensure independence and welcomed external input.

Another suggested an external audit by the NAO may increase impartiality and the perception of objectivity.

One respondent asked us to note that individuals and organisations that are under contract – especially those expecting or hoping for repeat business – also have incentives to temper their findings.

We were also told that we could set up an academic panel of experts in evaluation to draw on when seeking peer reviews.

Our response/next steps

We are pleased that stakeholders are broadly in agreement with how we ensure independence. We will continue to look for independent academic advice and peer review of our impact evaluations.

We will also consider new ways to ensure independence (eg double peer review, academic panel, commissioning work externally etc). We have strengthened this point in the framework at paragraph 5.15.

We are mindful that even if we contract our evaluations out, we need to ensure reviews are honest and frank. Input from academics, as well as reviews by the National Audit Office, will help us guarantee the quality and independence of external scrutiny.

We are not planning to form a fixed pool or panel of academics, though we can set up call-off contracts with academics with expertise in impact evaluation. We look to call on specific academic expertise relevant to the sector or type of intervention we are evaluating, and have done so for the Evaluation Papers published so far.

Q5) Are there any other challenges we should consider from undertaking ex post impact evaluations?

Key points/range of views

One respondent pointed out that we should be wary of relying too heavily on evidence from small sample sizes. They urged us to ensure we only draw conclusions where the evidence is solid and robust.

Another respondent pointed out that as many firms operate in similar markets but with different business models, we should be cautious in providing ‘assumptive’ benchmarks when defining relative performance.

Our response/next steps

We agree that we should only draw conclusions where the evidence is solid and robust. We will be transparent about the sources of our evidence and clear about the sample sizes we use, aiming to ensure they are sufficiently large relative to the population.

We are mindful that we cannot easily generalise and that evaluation findings are often context-specific. They may, however, provide a useful starting point for assessment, especially once we have built a substantial body of ex post impact evaluation evidence.