Our Rule Review Framework

Corporate documents Published: 14/07/2023 Last updated: 18/09/2023

We aim to make financial markets work well so that consumers get a fair deal. We regulate the conduct of 50,000 firms in the UK to ensure that our financial markets are honest, competitive and fair.  

One of the ways we do this is by making rules that set requirements for firms and individuals who carry out regulated financial services activities.  

When we intervene in markets, including by making rules, we use our strategic decision-making framework to ensure that we make effective regulatory decisions and that our powers are used consistently, transparently and proportionately. We use this to identify and diagnose harm, to design and implement an appropriate remedy and to review and evaluate our interventions.  

We have developed this draft Rule Review Framework (the Framework) to explain how we plan to monitor and review how our rules are working in practice. This is in line with an obligation introduced by the Financial Services and Markets Act 2023. This Framework applies to all our rules, which are found in our Handbook. In practice very few of our rules operate in isolation, so references to ‘a rule’ in this Framework may also include a set of rules, as appropriate. 

We are publishing a draft of this Framework to invite comments and input from all stakeholders before it is finalised. 

Executive summary

The draft Framework sets out our intention that, for many rules implemented after it is in place, we will collect and monitor data to assess the effects of our rule change. This is an important way for us to build our understanding, and evidence, of how our rules are working. We may choose not to actively monitor new rules where it is not feasible to do so, where it is not an effective use of our resources or where the rules relate to minor policy or rule changes with minimal impact.  

We update and amend our rules where necessary, but for many existing rules already in our Handbook, we may not have been systematically monitoring their outcomes. So our draft Framework sets out how we may have to take a different approach in these cases. We may choose to review these rules where we have evidence that the rule is not working as intended, or where we want to understand the overall effect (positive or negative) that the rule has on the market, while also considering whether there have been substantive changes in circumstances which affect how the rule is working. 

If the data suggests that there may be a problem with how the rule is working, we will consider a range of actions that we may take to address this, including undertaking a review. 

Stakeholder feedback also plays an important role in helping us to understand how well our rules are working. Feedback may come from the firms and individuals we regulate, from users of their services and consumers more widely or from trade associations or other representative bodies. It may inform our monitoring, contribute to a decision to conduct a review or provide evidence as part of a review.

The draft Framework sets out the 3 main types of review that we may undertake, their purpose and when we may use them. We will decide whether to do a review based on our plans, existing commitments and resources.

The 3 types of review are: 

  • evidence assessment 
  • post implementation review 
  • impact evaluation 

The draft Framework also sets out what actions we may take where a review shows that there are barriers to the rules working as intended.  These include considering whether: 

  • We can improve understanding of the existing rule, for example, through additional guidance. 
  • A change to the rule is needed (including varying or revoking the rule). We would follow our policy development process to achieve this. 
  • A further, more detailed, review would be helpful. This could be undertaken immediately, scheduled for a future date or depend on the outcome of further monitoring. This may be particularly relevant where our first review was an evidence assessment. 

As well as providing valuable insight into how well a rule is working, the findings of a review can inform our wider work and approach. We may gain insight into how we can improve other interventions, such as supervisory work, and inform our approach to future rulemaking. 

The sections of this Framework set out: 

    We invite your views on our Rule Review Framework

    Thank you for your feedback on the draft Framework. Our survey has now closed. We are analysing the submissions and will publish our response shortly.

    If you have any questions in the interim please contact us by email at: [email protected] 

    Our policy-making cycle and why we review rules

    Our rule review work is one part of our policy-making cycle. This cycle starts with us horizon scanning and clearly defining an issue. We identify instances of harm in financial services markets, for example where markets are working poorly and not providing sufficient benefit to users. We gather information from a range of activities to identify potential harm. These include day-to-day supervisory contact with firms, calls we receive from consumers, analysis of market intelligence, our own research and our ongoing engagement with stakeholders and their representatives. We do not act to remedy every potential harm. We consider the potential costs and benefits involved and our objectives.

    Once we have identified potential harm, we aim to diagnose its cause, extent and potential development. To do this, we must decide whether we already have enough information to assess the issue properly or if we need to carry out specific work to better understand it. We have a range of diagnostic tools. These tools include market studies as well as policy work such as calls for inputs and discussion papers. We can also analyse individual firms or several firms simultaneously, for example through our multi-firm work and thematic reviews. We aim to do this in a way that is cost-effective for us and for firms.

    When we understand the potential harms, we then consider whether they can be resolved entirely or only mitigated. To do this we assess the range of our available regulatory tools and make a judgement about whether these tools can remedy or mitigate the harm cost-effectively. Rulemaking is just one of our possible remedies. Other remedies include publishing guidance or other communications to firms or their customers or encouraging industry to act voluntarily to address a problem. We can vary or remove firms’ permissions to carry out certain activities. We can use our authorisations approach to control firms’ and individuals’ entry into the market. We also have our enforcement remedies where we have found breaches of our rules. 

    Reviewing the effectiveness of our remedies helps us to make better decisions, and to add more public value in the future. There is always the risk that regulatory interventions, including rulemaking, have unintended effects or don’t work as well as we anticipated. External factors and changes in the market may affect how well a remedy, like a rule, is working. So we need to understand whether our remedies are working as we intended and to measure their impact where possible. This also helps us to understand if our intervention has been proportionate to the outcomes achieved. If we find that the problems we identified in a market are still occurring and our remedies have not had the intended effect, or had a negative effect, we consider our next steps and whether to take further action. 

    This Framework focuses on our approach to monitoring and reviewing our rules. This is in line with the Financial Services and Markets Act 2000 (FSMA) (as amended by the Financial Services and Markets Act 2023) which requires us to keep our rules under review generally. FSMA also requires us to publish a statement of policy on our review of rules. 

    A clear policy on our approach to monitoring and reviewing our rules will help us to think carefully about the effectiveness of our rules throughout our policy-making cycle. When we look at whether to review rules, we look at whether the rule is working as intended to address the harm we identified, whether it is delivering wider unanticipated benefits or whether it is leading to other unintended consequences.  These will shape whether and when we may review a rule. 

    What's in scope

    This Framework applies to all our rules, which are found in our Handbook. The duty in the Financial Services and Markets Act 2023 to keep our rules under review does not apply to any materials that are not rules. However, when we review a rule, we can choose to review our guidance, any waivers or modifications and related materials to see if these are working well. 

    We can also use monitoring and the types of review in this Framework to assess the effectiveness of other types of interventions that we make. 

    Our Handbook is detailed and many of the rules inter-relate. We will need to keep this in mind to ensure that we do not review rules in isolation, as doing so could cause problems if we decide, after a review, that a rule is not working as intended and needs to be changed. 

    Areas shared with other regulators

    Some of our rules are in areas of responsibility shared with the Prudential Regulation Authority (PRA) and the Payment Systems Regulator (PSR). Our rules may also be determined or influenced by the same legislation or codes affecting other regulators, for example the Companies Act 2006 and the Corporate Governance Code. 

    These regulators have their own review processes and may also be directed by the Treasury to review specific rules or, in the case of the PSR, directions.

    Where we propose to review a rule that is in an area shared with another regulator, or where we are directed by the Treasury to do so, we will work with that regulator to agree the best way to carry out the review. We have formal memoranda of understanding with the PRA and the PSR which set out the framework for how we co-ordinate and co-operate in these kinds of scenarios. 

    We also work together with other regulators as members of the Financial Services Regulatory Initiatives Forum to produce the Regulatory Initiatives Grid. This sets out the regulatory pipeline, including initiatives that the regulators are working on together. This helps us to understand each other’s plans, including for upcoming reviews and, where appropriate, to coordinate. 

    Consideration of international standards and regulations

    Many of the markets we regulate operate internationally. We actively contribute to the development and implementation of international standards in global standard setting bodies. These support the management of cross-border risks to financial stability, market integrity and general confidence in the global financial system. They also support the development of common approaches with counterparts. We supervise and enforce rules on the basis of these standards in the UK. Where our rules flow from international standards, we are in general unlikely to depart from these without a compelling justification in line with our objectives. If changing such rules were the outcome of a review, we would need to consider carefully the effect of that change.

    Summary of our approach to reviewing new and existing rules

    How our proposals meet our objectives

    Our proposals advance the FCA’s strategic objective to make markets function well by ensuring our rules are having the intended effect on the market and are addressing the harm we wanted to target when we made them. 

    To make general rules we must be satisfied that they are necessary or expedient to advance one or more of our operational objectives to protect consumers, protect and enhance the integrity of the UK financial system and to promote effective competition in the interests of consumers. The Framework applies to all our rules, regardless of the operational objective they advance. It sets out how we will consider whether our rules are meeting their intended outcomes which will include considering whether they’re still advancing one or more of these objectives. 

    By ensuring the rules in our Handbook are working as intended and undertaking reviews in response to evidence that this is not the case, our proposals are also aligned with our secondary objective to facilitate the international competitiveness of the UK economy and its growth in the medium to long term.  

    We will ensure the approach to our rules supports our commitment to proportionate regulation.

    Figure 1: How the elements of the Framework fit together

    Figure 1 shows a range of ways in which we can assess the effectiveness of an intervention, and how these relate to our activities, the outputs from these activities and the outcomes we seek to achieve.

    Where we monitor or review rules, we are interested in the policy introduced by the ‘package’ of rules and therefore we may not monitor or review every individual rule within that package.

    The rule review process can be broken down into broad stages:

    New rules

    For many rules implemented after the Framework is in place (a new rule), we will collect data and monitor this in a systematic way. We may choose not to actively monitor new rules where it is not feasible to do so, where collecting data and intelligence would be disproportionate for the FCA or stakeholders, or where it is not otherwise an effective use of our resources. We may also decide not to monitor where the rules relate to minor policy or rule changes with minimal impact.

    • Where we do plan to monitor, we will set our intended outcomes when we develop the policy and rules. We may develop a causal chain setting out how we expect these outcomes will be achieved. We will then decide the key metrics we will monitor.
    • Once the rule is in force, we will start collecting and monitoring data on how well it is meeting, or is on track to meet, its intended outcomes. We will also consider relevant evidence from stakeholders. The appropriate timeframe for monitoring will depend on the specific circumstances for the rule. 
    • If the data suggests that there may be a problem with how the rule is working, we will consider whether this can be fixed easily or whether to do an evidence assessment. An evidence assessment aims to synthesise any data which indicates whether the key intended outcomes of a rule or policy intervention are being, or are on course to be, met. We decide whether or not to do an evidence assessment based on our plans, existing commitments and resources. 
    • The possible outcomes of the evidence assessment include starting the process to consider varying or revoking the rule, where the evidence suggests this is necessary and appropriate. Alternatively, we could choose to return to monitoring the rule, issue clarification or guidance on how the rule operates to improve stakeholders’ understanding, or plan a more in-depth review. 

    We will also continue to maintain our planned pipeline of impact evaluations. To ensure that we have the right data for this kind of analysis, this is best planned at the stage we are developing the policy and rules. This contrasts with our wider approach of monitoring data or the market to decide whether it suggests a review is needed.

    Figure 2: Overview of rule review process for new rules

    Overview of rule review process for new rules (PDF)

    Existing rules or other rules which we are not actively monitoring

    We may not have been systematically monitoring data for existing rules already in our Handbook, or we may have implemented new rules and chosen not to monitor them. In these cases, we may have to take a different approach to deciding when we do reviews.

    Due to the significant number of rules already in the Handbook, we may consider these rules for review where we:

    • have evidence that the rule is not working as intended
    • want to understand the overall effect (positive or negative) that the rule has on the market, while also considering if there have been substantive changes in circumstances, including market developments or the introduction of other rules, which affect how the rule is functioning

    In some cases, we may have been collecting data relating to the rule, for example through our supervisory work, which we can use to assess how it is working.  In other cases, stakeholders might actively give us evidence about how the rule is working which we will consider carefully alongside all other available evidence.

    We take decisions to review rules based on our plans, existing commitments and resources. We will also take account of the adequacy of available data and the likelihood of improving any gaps in a proportionate way.

    Once we have decided to do a review, the stages will broadly follow the same process as for new rules from evidence assessment onwards.

    The Treasury can also require us to review a rule (see Government-directed reviews).

    Figure 3: Overview of rule review process for existing rules

    Figure 3: Overview of rule review process for existing rules (PDF)

    How we set, measure and monitor the outcomes of our rules

    Setting outcomes

    We set the outcomes we are seeking to achieve for new rules and policies so that we can assess whether they are working as intended once they have been implemented.   

    Our statutory objectives shape and guide our policy development and rule proposals. We generally explain the outcomes we want to achieve by reference to how they advance one or more of our statutory objectives.   

    Monitoring outcomes

    Monitoring is an important way for us to build our understanding, and evidence, of how our rules are working. We will monitor new rules and policy unless they relate to minor policy or rule changes with minimal impact, for example administrative changes or less significant policy interventions like those we consult on via our quarterly consultation papers. In these cases, we may decide not to monitor. We may also choose not to actively monitor new rules where it is not feasible to do so, where collecting data and intelligence would be disproportionate for us or stakeholders or where it is not otherwise an effective use of our resources. 

    When we monitor, we will typically monitor the ‘package’ of rules that make up a policy, as opposed to monitoring each individual rule. 

    For ultimate outcomes that take a long time to materialise, we may monitor intermediate outcomes as leading indicators of progress towards our ultimate outcomes.  

    Where we identify potential unintended consequences of the rule, we may also plan to monitor these. For example, firms might increase the prices of ancillary products to offset the costs of complying with a product-specific fee cap. 

    The difference between outputs and outcomes

    In monitoring our rules, we need to differentiate between outputs and outcomes. Outputs relate to our activities. Outcomes are what is achieved because of the outputs. The examples in Table 1 illustrate the difference between outputs and outcomes. 

    Table 1: Examples of outputs and outcomes

    Example 1 
    • Harm: Rent to own customers are often vulnerable and pay prices that are too high. 
    • Intervention: Rent to own price cap. 
    • Output: Publishing a consultation paper and policy statement to bring the price cap into force. 
    • Intended outcomes: Rent to own customers receive fair prices and the cost of borrowing is reduced. 

    Example 2 
    • Harm: Market stability is threatened by misconduct and manipulation cases relating to financial benchmarks. 
    • Intervention: Bringing 7 benchmarks into the regulatory and supervisory regime. 
    • Output: Publishing a consultation paper and policy statement setting out the FCA’s framework for regulating and supervising the additional 7 benchmarks being brought into regulatory scope. 
    • Intended outcomes: Benchmarks’ robustness and representativeness increase, leading to improved liquidity and participation in the underlying markets. 

    Our organisational outcomes and metrics

    More broadly, we monitor progress against our key areas of focus, as set out in our published Strategy and Business Plan. We report on our performance against our Business Plan in our Annual Report. 

    In many cases, the outcomes and metrics for our new rules will align with our organisational strategic outcomes and metrics set out as part of our strategy.  We may therefore also use those metrics to assess whether there may be an issue with our rules. 

    Using causal chains to assess how our interventions will work

    We may use a causal chain to explain how we believe an intervention, like new rules, will work by setting out the steps between the outputs and the outcomes we want to achieve. A causal chain may be set out in our cost benefit analysis (CBA), the document in which we assess the costs and benefits of our proposals. 

    Causal chains will inevitably include some assumptions, for example about changes in behaviour that may result from our intervention. However, they are a useful tool to help us understand how we will bring about the change we want to see. 

    Identifying key metrics to monitor

    We can monitor outcomes, including lead indicators or intermediate outcomes, to help us assess whether the assumptions we made in the causal chain are correct. To do this, we identify a set of metrics relating to each outcome to help us establish whether the rule is working as intended. We need to examine and frame these metrics carefully to avoid misinterpreting movements and to account for predictable variations that recur each calendar year (‘seasonality’). For some rules, we may find it useful to measure outputs as well as or instead of outcomes (see the difference between outputs and outcomes). 

    Data sources for key metrics

    Evidence for our metrics can come from different sources of data. We collect a lot of data and have a variety of sources to help us understand how our rules are working, including our authorisation, supervision and enforcement work.    

    The following are examples of possible sources of data for our metrics: 

    1. Stakeholder feedback gathered through, for example, roundtables, focus groups and other forums as well as evidence proactively provided by stakeholders to us about how the rules are working. 
    2. Our ongoing engagement with stakeholders’ representatives, for example through trade bodies, industry led bodies and industry associations, consumer organisations and civil society. 
    3. Research, such as surveys like the FCA’s Financial Lives Survey and Practitioner Panel Survey and insights from behavioural analysis, including into consumer experience. 
    4. Feedback from our statutory Panels and advisory committees. 
    5. Our supervisory work, including ongoing contact with firms and our multi-firm reviews. 
    6. Information provided by firms through regulatory returns and reporting. 
    7. Complaints data and data from the Financial Ombudsman Service and other regulators.  
    8. Information from our enforcement work and primary and secondary market oversight work. 
    9. Market intelligence, including from wider monitoring of the market, our market studies and our thematic reviews. 
    10. Parliamentary feedback including MPs’ letters, Parliamentary debates, and Select Committee inquiries. 
    11. Third party data, such as the media, market research firms and financial data providers. 
    12. How similar measures, and reviews of these, have been implemented by other bodies such as authorities in other jurisdictions. 

    These data sources are also relevant when we decide to do a review and want to explore in more detail how the rule is working. At that stage, we may also need to request further information from firms or to do further consumer research. 

    How we monitor rules

    As part of the CBA, or earlier analysis before implementing rules, we typically establish a baseline showing the state of the market before our intervention and we also consider how it would have changed over time without our intervention (counterfactual). Ideally, we will collect data for the baseline for several time periods (months/years) before implementation to understand underlying trends or seasonality in the data. However, often this may be unfeasible or disproportionate.  

    Once the rule has been implemented, if we plan to monitor the policy, we will collect or monitor data to assess progress. Ideally we will collect and monitor at regular intervals and compare this ongoing data against the baseline or counterfactual. The appropriate frequency will depend on the circumstances of the rule and availability of meaningful data. For example, we may collect some data monthly, and some annually. We will consider proportionality, weighing the value of the data against any cost for firms to provide the data and the costs of collection. 

    If there are signs that the rule is not working as we intended, we may decide to conduct a review of the rule (see Types of review and how and when we undertake them). 

    If the data and information we monitor indicates that we are on track to achieve the intended outcomes of the intervention, we will continue monitoring. The length of time over which we monitor will depend on the relevant rule. Once we are content that a rule is working as intended, we can treat it as an existing rule and, where necessary, respond to evidence that it may need to be reviewed. 

    How we use stakeholder feedback

    We welcome stakeholder feedback on how well our rules are working.  Feedback may come from the firms and individuals we regulate, or from users of their services and consumers more widely. In the context of rule reviews, this feedback plays an important role in 3 ways: 

    • it will inform our monitoring of how well rules are working 
    • it may contribute to a decision to conduct a review of a rule by providing evidence that a rule is not working as intended 
    • if we review a rule, stakeholder feedback can provide useful evidence as part of the review 

    Whether we seek stakeholder feedback as evidence for monitoring a rule will depend on the circumstances of the particular policy and our monitoring plan. Similarly, whether we seek stakeholder feedback as part of a review will depend on our plan for that review.  

    In relation to contributing to a decision to review, stakeholders can feedback to us on whether a rule is working as intended through several channels including: 

    • our ongoing engagement with stakeholders’ representatives such as trade bodies, consumer groups (including our Consumer Network), and civil society 
    • opportunities such as roundtables, focus groups, sprints and surveys focused on specific topics 
    • our ongoing supervisory work with firms  

    Our statutory panels also play an important role in giving us feedback about how our rules are working in practice. As part of our ongoing engagement with our Panels, we will share our review plans and priorities with them and seek their views. We will also seek the Panels’ input into individual reviews where appropriate. Our relationship with the Panels is such that they regularly raise issues with us, as well as providing feedback and input.  

    For new rules, we will ensure we consider the evidence provided by stakeholders as part of any ongoing monitoring. For existing rules or other rules that we may not be actively monitoring, we will ensure that we consider stakeholder evidence when deciding whether to review these rules.  

    Channels for stakeholders to feedback on specific rules

    We are committed to ensuring there are clear and appropriate channels for our stakeholders to raise concerns about specific rules, as part of the Framework. We have set out, at How we use stakeholder feedback, different ways that stakeholders can already feedback to us about our rules. We are also looking at ways in which stakeholders could more easily share evidence that a rule is not working as intended. We are considering: 

    • a feedback option embedded in our online Handbook, so stakeholders can easily feedback on specific rules 
    • a feedback option on our website for stakeholders to provide feedback on any rule

    The importance of evidence

    Reviewing rules, and changing rules because of our reviews, takes time and resource. It can also increase uncertainty for firms and individuals applying our rules. It is important that stakeholders providing feedback set out, with supporting evidence: 

    • what is not working  
    • the effect of the rule not working as intended 

    This will help us to understand what is happening and, where appropriate, to prioritise.  

    We will consider any feedback received and, where we agree there may be a need for a review, we will build this into our organisational planning and prioritisation processes. This will need to consider work we have already planned such as the repeal and replacement of retained EU law, which is a significant exercise involving the review of around 35 policies for which the FCA has responsibility, as set out in Our immediate priorities for review below.  

    We cannot commit to undertaking a rule review in response to every piece of feedback.

    Types of review and how and when we undertake them

    Overall, we have 3 types of review: 

    • evidence assessment 
    • post implementation review  
    • impact evaluation 

    We also have other options available to investigate specific types of problems suggested by the data we have collected. We may undertake: 

    • a market study where we have concerns that competition in the market may not be working well  
    • a thematic review or multi-firm work where we have concerns about firms’ compliance with rules 

    At times, we may be able to understand what is happening without a review, simply by further interrogating the available data or supplementing it with additional data. However, in cases where there are still concerns about the effectiveness of the rule, we may need to carry out a review. Table 2 sets out the types of review we may undertake.  

    Table 2: Types of review 

    Type of review Why we do it How we do it When we do it
    Evidence assessment 

    To answer the following questions: 

    • Has the intervention achieved its intended outcomes, or is it on course to do so? 
    • Have there been market developments or changes to the wider regulatory landscape which may have impacted the rule’s effectiveness? 

    We aim to collate and analyse evidence which indicates whether the intended outcomes of a rule or policy intervention are being, or are on course to be, met. This may involve a focus on the early or intermediate part of the causal chain of an intervention. This helps us understand effects without providing precise quantitative estimates. The focus is to assess if the key changes expected have happened, and the reasons for these changes, without necessarily isolating the exact causal effect of the intervention.   

    We can also use an evidence assessment for existing or other rules where we did not establish a causal chain and there are no clear outcomes to measure against. Here, our aim is to understand the overall effect (positive or negative) that the rule has on the market and whether there have been significant changes in circumstances that affect how the rule works, such as market developments and the introduction of other rules.  

    As far as possible, an evidence assessment will make use of existing data and information held within the FCA.

    If monitoring, stakeholder input or other evidence indicates that a rule is not working as intended, or there has been a change in circumstance which has a significant impact on the efficacy of the rule or the context in which it is applied.
    Post implementation review 

    Why we do it 

    To answer the following questions: 

    • Has the intervention achieved its objectives?  
    • Have there been unintended effects? 
    • Has the intervention been implemented consistently? 
    • What were the obstacles to implementation? 
    • Can the intervention be improved? 

    How we do it 

    We aim to establish whether a rule or policy intervention has met its intended outcomes, while also identifying implementation issues and potential unintended consequences, assessing compliance with the rule and examining the wider state of the market after an intervention. A post implementation review does not typically set out to establish causality or examine what would have happened if we had not intervened.  

    Our focus will primarily be on assessing the outcomes from the early or intermediate part of the causal chain of an intervention, through significant engagement with internal and external stakeholders. We use the results of a post implementation review to understand whether an intervention has worked as expected, to make improvements or to change our approach. 

    When we do it 

    Where we have evidence that a rule is not working as intended and anticipate that significant data analysis and stakeholder engagement will be required to understand which areas of a rule’s delivery and implementation worked well, and which did not.  

    We may also plan in advance to carry out a post implementation review to assess whether the rule implementation has been successful. In these cases we would not seek to establish a causal link between the rule and the outcomes. 

    Impact evaluation 

    Why we do it 

    To answer the following questions: 

    • Has the intervention achieved its objectives? 
    • What is the causal effect of our intervention on the measured outcomes? 
    • Did we estimate costs and benefits (including how they are distributed) correctly? 

    How we do it 

    Our primary purpose is to attempt to isolate and quantify the impact of our interventions and more reliably attribute it to our actions. This evaluation attempts to establish the counterfactual (what would occur in the absence of an intervention) and measures the impact of interventions on outcomes in a way that controls for the effects of material changes in the business environment.  

    It can include qualitative discussions with stakeholders to help understand why results are as they are. When other more empirically robust methods cannot be used, it can also include studies which show that both intermediate and final outcomes have changed in the direction we expected along an agreed causal pathway (from the implementation of the policy to the realisation of benefits) (see Box 1 on theory-based evaluation). In those cases, data analysis, coupled with views from stakeholders, helps establish a counterfactual. 

    When we do it 

    Typically planned in advance, at the policy development stage, to ensure we collect the correct data.  

    We use a set of criteria to determine which rules are suitable candidates for an impact evaluation. This includes the ability to identify a counterfactual against which causal impacts can be measured.

    Box 1: Theory-based evaluation 

    Theory-based evaluations are a form of analysis focused on understanding changes that have occurred with close reference to the causal chain. Depending on the depth of the analysis, this approach can be useful in evidence assessments as well as in impact evaluations.  

    They typically do not provide precise estimates of effects or have a causal interpretation (they do not show that intervention X caused Y to happen). However, they can provide suggestive evidence of how the impacts may have occurred. They can demonstrate that the intended outcomes of the relevant rule are being achieved and show the mechanisms through which this may have happened.

    Theory-based approaches to evaluation use a theory of change to draw conclusions about whether and how an intervention contributed to observed results. Generally, a theory of change includes: 

    • a causal chain  
    • the assumptions, risks and, in some cases, the mechanisms associated with each link in the causal chain  
    • the external factors that may influence the expected results 
    • any empirical evidence supporting the assumptions, risks and external factors 

    Under an evaluation method known as contribution analysis, if a theory of change can be substantiated with empirical evidence and any major external influencing factors can be accounted for, then it is reasonable to conclude that the intervention has made a difference to the observed outcomes. 

    Evidence assessments

    Indicative criteria for when we will do an evidence assessment 

    • The metrics we monitor show that intermediate outcomes are consistently not being met or we are at risk of not achieving our ultimate outcomes. 
    • Significant new evidence suggests we should look at the rule, including evidence from our supervisory or enforcement work, or from stakeholders. 
    • Significant changes in the market or the wider context may have affected the way the rule is working.

     

    An evidence assessment is a process of collecting and analysing available intelligence and information, which allows us to provide an assessment of whether the rule: 

    • is on track to achieve its original outcomes or objectives, for example as set out in the causal chain at the policy implementation stage 
    • has resulted in any implementation issues or unintended consequences 
    • can be improved, or may be improved with more evidence, to better meet our intended outcomes  

    We may consider doing an evidence assessment where monitoring suggests that the rule is not working as intended and we need to better understand the underlying reasons for this. Our focus is to assess whether the key changes expected in the policy’s causal chain have occurred, and the reasons for these changes, without necessarily isolating the exact causal effect of the intervention. An evidence assessment concentrates on the early or intermediate part of the causal chain, helping us understand effects without producing precise quantitative estimates.  

    For existing rules where we did not set out a causal chain at implementation, it may be difficult for us to establish outcomes against which we can measure whether the rule is working well. In these cases, we will set out to understand the overall effect (positive or negative) that the rule has on the market, while also considering whether there have been substantive changes in circumstances, such as market developments or the introduction of other rules, which alter how the rule is functioning.  

    An evidence assessment is designed to be informative and credible, while being less resource-intensive. It is also flexible, allowing us to design a review that is appropriate for the relevant rule and for the issues the data suggests need to be addressed. 

    In most cases, we will use existing intelligence we already hold, particularly data that has been collected through any monitoring or our supervisory work. We might supplement this with qualitative data, such as evidence provided by stakeholders or their representatives including trade associations, consumer groups and our statutory Panels. This enables us to take a quicker, more agile approach to assessing a rule or policy.  

    We will undertake evidence assessments internally, building on our expertise. 

    Post implementation reviews

    Indicative criteria for when we will do a post implementation review 

    • An evidence assessment suggests that intermediate outcomes are consistently not being met or we are at risk of not achieving our ultimate outcomes, but further engagement with stakeholders or data analysis is required to determine what course of action should be taken. 
    • There may be times when a rule meets the criteria for an impact evaluation (see section on Indicative criteria for when we will do an impact evaluation) but we are unlikely to be able to identify a counterfactual or establish a causal link. In these cases, we may carry out a post implementation review instead. These may be planned in advance at the policy development stage.  
    • There is potential to act on lessons learnt from the review and these may be relevant to our future work.

     

    A post implementation review should assess whether the implementation of a rule has been successful, as measured by key changes, outcomes and discussions with stakeholders.  

    We expect that this process will require data and stakeholder engagement beyond that collected during the monitoring process and we may use external parties to help us with parts of the review. This means a post implementation review is likely to require more time and resource than an evidence assessment.  

    Typically, and in line with Government practice, we use a post implementation review to establish whether, and to what extent, the rule:  

    • has achieved its original objectives as set out in the consultation paper or policy statement 
    • has been implemented and complied with as intended 
    • has resulted in any unintended effects 
    • has objectives which are still valid 
    • is still required and remains the best option for achieving those objectives 
    • can be improved to reduce the burden on business and its overall costs 

    A post implementation review will typically seek to identify areas of a rule implementation and delivery process which worked well, areas which could be improved upon and how external factors have influenced the context of the delivery. As such, we expect to use the outcomes of a post implementation review to guide our decisions on whether rule changes or further guidance are required, as well as influencing the implementation and delivery of future interventions. 

    Our staff typically undertake post implementation reviews, often with input from independent experts, especially for larger reviews. 

    Impact evaluations

    Indicative criteria for when we will do an impact evaluation 

    • A rule addresses significant harms, generates market upheaval or has large ongoing costs to firms. 
    • A rule is a new intervention or we were uncertain over the outcomes when it was implemented. 
    • A rule is highly controversial or high profile.  
    • There is potential to act on lessons learned from the evaluation and these may be relevant to future work (such as extensions to policies), other markets or to make our CBAs more robust. 
    • We can identify relevant counterfactuals against which we can measure impact. This can be affected by factors such as external shocks to the market or the presence of multiple market interventions. 

    We also need to consider the availability of data, including data that we need from firms. We can factor data collection into our planning when we are developing the rules.

     

    An impact evaluation is our most rigorous tool for assessing the impact of our interventions. This type of review focuses on using causal methods to quantify impacts. Impact evaluations will be planned in advance, at the policy development stage, to ensure we collect the correct data.  

    We will make sure that we have a feasible plan for evaluation before implementing rules. A good design will allow us to analyse the causal impact of our rules, separating this from other changes in the market. This fundamentally relies on establishing a plausible counterfactual to measure what may have happened had we not intervened. In some cases, the design may also allow us to make statements about why the rule has had certain effects or the mechanisms through which there has been an impact. However, if a market has experienced a range of external shocks, there have been multiple interventions or data has not been collected at implementation, it will be very difficult or impossible to establish the impact of a particular intervention (see the Annex for more discussion of challenges for impact evaluations and how we may address them).   
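The counterfactual logic described above can be illustrated with a simple difference-in-differences calculation. This is an illustrative sketch only, using entirely synthetic figures and a hypothetical outcome measure; the Framework does not prescribe any particular estimation method. The change over time in a comparison group unaffected by the rule stands in for what would have happened without the intervention.

```python
# Illustrative difference-in-differences sketch (synthetic figures).
# Outcome: a hypothetical average product fee (%) before and after a rule.

treated_before, treated_after = 1.20, 0.90   # firms subject to the rule
control_before, control_after = 1.10, 1.05   # comparison group, not subject to it

# Change over time in each group
treated_change = treated_after - treated_before
control_change = control_after - control_before

# The comparison group's change proxies the counterfactual (what would have
# happened anyway); the difference in the two changes is the estimated
# impact attributable to the rule.
estimated_impact = round(treated_change - control_change, 2)
print(f"Estimated impact: {estimated_impact}")  # prints Estimated impact: -0.25
```

In practice, a credible comparison group can be hard to find, which is why the criteria above include the ability to identify a counterfactual.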

    Impact evaluations are generally the most quantitative form of review but may draw on a variety of evidence, including qualitative tools. In conducting this form of review, we know that not all impacts may be easily quantifiable.  

    These evaluations are a demanding form of review in terms of data requirements and analytical resource. We can only undertake them when sufficient time has passed to observe the full effects of a rule. Undertaking impact evaluations requires us to use a significant amount of our resources and often to make ad hoc data requests of firms. We must consider value for money in our work, and recognise that data requests create a cost for firms. We will undertake impact evaluations only when it is proportionate to do so, for a subset of our interventions. We also consider the best ways to collect the information needed for the evaluation, balancing the rigour gained against the costs this would entail. 

    As a guide, where we are planning to do an impact evaluation of a rule, we normally wait until about 3 to 5 years after implementing the rule. The exact timing will depend on various factors, such as the details of the rules, the relevant market and the scale of change. However, where there is a high cost associated with the rule or the rule is addressing a severe harm, we might review the rule sooner to ensure it has been implemented as planned and is addressing the harm. 

    It is also important that we can ensure the robustness and credibility of our impact evaluations where we make claims about the specific impact of our interventions. We will do this by either commissioning them externally or, where carried out by our staff, ensuring they are peer reviewed by external experts. We expect to publish most, if not all, of our evaluations, so they will also be subject to external scrutiny. 

    More information on our approach to impact evaluations is set out in the Annex. We invite feedback from stakeholders on the Annex.  

    Figure 4: Comparison of types of review


    Our immediate priorities for review

    We are publishing this draft Framework in the context of significant work that has an impact on our Handbook. 

    The Consumer Duty (the Duty) introduces a more outcomes-focused approach to financial regulation. This means that we are likely to introduce fewer detailed and prescriptive rules in future where the Duty applies. Some existing rules in the Handbook may also become obsolete. This is something we will be considering as part of keeping our rules under review. 

    The repeal of retained EU law (REUL) and replacing this, where appropriate, with rules in our Handbook will allow us to review the effectiveness of REUL and to make changes. This is a significant exercise involving the review of around 35 policies for which the FCA has responsibility. It covers substantial areas of regulation like the Markets in Financial Instruments Directive (MiFID) and the Packaged Retail and Insurance-based Investment Products (PRIIPs) Regulation. Overall, we will be working to ensure that our rules are tailored to best suit UK markets and to meet our statutory objectives. The regulatory framework reforms section of our website has more information about our approach to this process.  

    While we anticipate that the repeal and replace process will engage a significant portion of our resources for the immediate future, we plan to continue to meet our commitment to undertake at least 1 impact evaluation a year (see Table 2 for more information on this type of review). 

    We also plan to monitor some of our new rules and, where appropriate, to respond to intelligence that shows that a rule may not be working as we intended. This may lead us to conduct a review of the rule, in line with the types of review set out in the Framework (see Table 2). Our decision to review a rule will depend on evidence that the rule may not be working as intended. We will also consider our resources and our organisational priorities, as set out in our Strategy and our Business Plan. 

    Challenges to undertaking reviews

    Post-implementation monitoring and review is important, but it is not always straightforward.  

    Reviews can be time-consuming and can also impose a cost on firms if we require more data or input from them. We need to balance the benefit of monitoring and review against the cost and to ensure we prioritise our resources to address the most pressing or significant harms.  

    It can be difficult to isolate the impact of our actions given the dynamism and complexity of the markets we regulate. Other factors, such as macroeconomic or technological change or the response of firms or consumers, may be responsible for changes. A key constraint to effectively carrying out causal evaluations like impact evaluations is being able to establish relevant counterfactuals against which we can measure our impacts. While we aim to address this by embedding evaluation into our policymaking from the start, sometimes events will intervene and it will be impossible to distinguish the effect of our rules from other market developments. 

    When assessing the outcomes of our rules, we need to ensure enough time has passed since the intervention to allow the rule to have been implemented and identifiable changes in behaviour to take effect, but not so long that things have moved on too much. So, in some cases, it may be preferable to review rules earlier by focusing on the success of the implementation or on intermediate outcomes rather than on ultimate outcome metrics. 

    One of the benefits of having different types of review available to us is that where we cannot overcome a challenge with one type, we may be able to use another. For example, we may be able to use theory-based evaluation (see Box 1) where we are not able to establish a causal link between the impact and our intervention through other more direct means.  

    Actions we can take after a review

    As well as providing valuable insight into how well a rule is working, the findings of a review can inform our wider work and approach. We may gain insight into how we can improve other interventions, such as supervisory work. Reviews also inform our approach to future rule-making, for example, our impact evaluations can help inform and improve the assumptions we make in our CBAs.  

    Once we have the findings of a review, we may decide to take any of the following actions: 

    Cases where a review shows there is a significant problem

    Where our initial review shows that there is a significant problem with a rule, we may want to move swiftly to address this. It is important that we meet our obligations as a public body to act fairly and reasonably and in line with our statutory processes for making, amending and, if relevant, revoking our rules. We also want to ensure that we do not cause uncertainty for stakeholders by suddenly or repeatedly changing our rules. So we will consider the options (such as an expedited consultation process or waiving or modifying our rules) on a case-by-case basis where the outcome of the review justifies swift action and the solution, and the method of adopting it, are appropriate.  

    Where the problem is significant, and the solution itself amounts to a significant intervention, we will follow our standard processes for formulating and implementing the policy solution. 

    Our approach to reporting

    We know that our stakeholders have an interest in understanding how our rules are working and what we learn from our reviews.  

    We expect to publish most of our larger post implementation reviews and impact evaluations, taking into account potential commercial sensitivities. This ensures transparency and credibility of the work and contributes to the body of public policy evidence on effective regulatory interventions. If the Treasury directs a review, there are certain reporting requirements that we must follow (see Government-directed reviews).  

    We will keep stakeholders updated on our other reviews through our wider reporting and communications. We want monitoring and review to be part of our ongoing policy development and improvement and so want to avoid it being overly resource-intensive by requiring a public report in every instance. However, there may be cases where we publish a more formal update on our review, for example, where a review has been more extensive or has shown that the rule is not working as intended. 

    Government-directed reviews

    The Treasury has the power to direct us to carry out a review of specified rules. In doing so, it may specify the timing and the scope and conduct of the review. It may require us to provide interim reports during the review. It can also direct that someone independent of the FCA should do the review.  

    Where the Treasury has directed us to carry out a review, we will work with it to determine the most effective way to do so. There are specific reporting requirements for these types of review. In a written report to the Treasury, we must explain our opinion on: 

    • whether the reviewed rules are compatible with our strategic objective, advance one or more of our operational objectives and advance our secondary objective 
    • whether and to what extent the rules are functioning effectively and achieving their intended purpose 
    • whether any amendments should be made to the rules and, if so, what those amendments should be 
    • whether any rules should be revoked, with or without replacement 
    • whether any other action should be taken and, if so, what that action should be