Innovation Hub

AI Live Testing: The use of AI in UK financial markets - from promise to practice

Ed Towers, head of department, advanced analytics and data science unit and Henrike Mueller, manager, AI strategy

We want the UK to be a place where beneficial technological innovation can thrive to support growth. So how can we build confidence in AI so consumers and markets benefit?

Artificial Intelligence (AI) is reshaping industries, including financial services. It has the potential to transform decision-making and customer experiences in UK financial services for the better. But it also raises concerns about how it can be used safely and responsibly. Left unanswered, these concerns can slow the pace of innovation as well as introduce new risks.

That's why we’re proposing AI Live Testing – a practical, collaborative way for firms and the FCA to explore methods to assure AI systems together. Here, we set out why you should apply to take part.  

The promise of AI  

Using AI can result in:  

  • Smarter decision-making: Models can process huge volumes of data to assess risk, automate compliance, and tailor financial products.
  • Stronger protections: Machine learning systems are spotting anomalies and stopping fraud before it affects consumers.  
  • Better experiences: Natural language models and predictive analytics can personalise financial products to better meet consumer needs.  

But this innovation depends on more than just computing power and data. Crucially, it also depends on confidence that AI can be used safely and responsibly in UK financial markets. Otherwise, innovation will stall.  

We want the UK to be a place where beneficial technological innovation can thrive to support growth and competitiveness. The question then becomes: how can we build confidence in AI so that consumers and markets benefit?  

The regulatory challenge: clarity without closure

This is where financial services regulators face a tricky balance. On the one hand, at the FCA we recognise that firms need regulatory clarity on AI – a predictable framework to guide their investments, operations, and risk management.

On the other hand, we also recognise that if we, as the UK's financial services regulator, move too quickly or narrowly, we could unintentionally stifle innovation. Any regulatory action could also become outdated very quickly as the technology evolves at pace.

A collaborative path forward: AI Live Testing

AI Live Testing is a practical, hands-on way to build trust, reduce risk, and accelerate safe and responsible innovation. This isn’t a compliance exercise. It’s a partnership.  

It provides a structured but flexible space where firms can test AI-driven services in real-world conditions, with appropriate regulatory support and oversight (see the Terms of Reference (PDF)). It will help firms who are further along in AI development and are ready to deploy their AI in markets. This will allow both sides to:

  • Understand how an AI system performs in live market contexts – in practice and not just in theory.
  • Identify potential risks early and adapt controls accordingly.
  • Explore potential assurance methods that are grounded in how AI is used in a specific market context; not just what’s ideal on paper.
  • Share insights that feed into smarter, more future-proof AI approaches.

AI Live Testing complements the FCA's Supercharged Sandbox, which is focused on firms who are in the discovery and experimentation phase of their AI product's development.

This joined-up approach mirrors the spirit of innovation itself: test, learn, iterate, improve. It's a core part of the FCA's AI innovation offering through the AI Lab and is set out in our Engagement Paper on AI Live Testing.

Why you should take part

We know that complex questions around how AI systems will perform in the real world can significantly slow the pace of innovation.  

That's why we want to help firms by providing a place where they can test. AI Live Testing can offer you:

  • Clarity on expectations: Get early, practical feedback on what regulators are looking for when it comes to, for example, considerations around explainability, fairness, performance, and controls. Less guesswork.  
  • Confidence in deployment: Test how your AI system behaves under real operating conditions to ensure it works in a safe and responsible way.  
  • A chance to shape best practice: Your experience in AI Live Testing helps inform insights and best practice for the rest of industry. You can help make sure that regulatory approaches are grounded in operational reality and make a genuine difference for the better for consumers and markets.

What AI Live Testing isn't  

FCA AI Live Testing is not designed to become a tool to approve or certify that an AI model is fit for use. We don't believe this is within the FCA's remit. It also isn't an enforcement or supervisory function – it's entirely voluntary.

From model evaluation to AI system evaluation  

Much of the broader AI assurance debate has focused on how AI models operate. But when it comes to AI deployment, the model is just one part of a broader AI system. This is particularly true for financial services, where we need to understand how AI systems behave in real-world market situations.  

We define an AI system as an ecosystem of people, processes and technologies designed to deliver intended outcomes. It includes the model, data pipeline, human oversight, testing and governance. Considering the AI system as a whole is key to reducing bias, improving accuracy, and increasing transparency.

And yet, our work to date suggests that some of the most important questions are more fundamental and even come before considering issues such as bias or accuracy. They are:  

  • What is the intended goal and outcome for the AI system – at a point in time and over time?
  • What are the key risks relative to use case context, as well as considerations around regulatory compliance?
  • How can these risks be best mitigated and what residual risks are acceptable?  
  • Do we have a shared framework to assess these risks?  

These are not academic questions. They’re about ensuring that the AI system is tested for its real-world impact – including its impact on consumers and markets (see ‘AI for growth – how the FCA can help’). These are live issues that respondents to our Engagement Paper on AI Live Testing are considering. We will be sharing more details when we publish the Feedback Statement in September 2025.  

AI Live Testing is a bridge between beneficial innovation and assurance. It gives you a structured way to take bold steps forward, with the reassurance that you're building on solid ground.

We’re inviting financial firms of all sizes to take part. If you’re building something ambitious and want to get it right, this is your chance to test, learn, and lead.

Join us in shaping the safe and responsible future of AI in UK financial markets. 

Find details of how to apply for AI Live Testing. The deadline for applications is Wednesday, 20 August 2025.