AI: Flipping the coin in financial services

Speech by Jessica Rusu, FCA Chief Data, Information and Intelligence Officer, at the City and Financial Global AI Regulation Summit 2023.

Speaker: Jessica Rusu, Chief Data, Information and Intelligence Officer 
Event: City and Financial Global: The AI Regulation Summit 2023
Delivered: 5 October 2023
Note: This is a drafted speech and may differ from the delivered version 

Highlights 

  • We’re at a pivotal juncture in how we approach more than just AI alone
  • Digital infrastructure, resilience, consumer safety and data are vital to getting AI integration right
  • Beneficial innovations from AI will only materialise via regulation
  • It is our responsibility to ensure the safe and responsible adoption of AI in financial services

AI - two sides of the coin

Imagine you’re holding a coin in your hand - an increasingly rare experience, as I don’t usually carry cash with me anymore - which is itself a pertinent example of how technology has already reshaped the financial habits of our everyday lives.

So, take this coin in my hand. As data scientists, we like certainty and binary outcomes. Heads, you win; tails, you lose.

There are many experts and professionals in the tech community who feel this way about Artificial Intelligence (AI), giving as much weight to the benefits of AI as to its potential harm to humanity.

Perhaps more pressing for the present moment: should the financial services industry embrace and adopt AI, or should it steer clear?

Now scale that image up from a single coin to the entire financial services system - would you still be willing to take the chance?

AI has the potential to transform the way we manage our finances, and is becoming pivotal in shaping the global economy. On one side of the coin, we have the shiny prospects of AI-powered innovation, promising greater operational efficiencies and accessibility in financial services, increasing revenues and driving innovation.

On the other side of the coin, we have a whole host of potential risks. We are at a key moment now - we have real choices about where we take AI.

Or, to use the coin analogy: right now the coin is spinning in mid-air, and we can still influence the outcome. This is a pivotal moment.

Why AI is only the beginning of the conversation

Today’s event is focused on AI. However, if we focus too narrowly on AI alone, we miss the opportunity to connect the dots with the broader issues: digital infrastructure, a growing reliance on the cloud and other third-party tech providers, the important role that good quality data plays in the adoption of AI, and how all of this affects consumer safety.

Let’s explore these 4 areas - areas where it is our responsibility to decide how the coin falls, or how the penny drops.

Building digital infrastructure with strong foundations

Most AI deployments run on the cloud and are powered by large volumes of data. These interdependencies create digital infrastructure and third party-related stability risks that must be considered.

Many of the large incumbent Big Tech firms are major suppliers of UK technology infrastructure, including AI, and supply technology to regulated entities in financial services and other sectors. This creates digital infrastructure and Critical Third Party (CTP) risks.

The FCA’s emerging approach to Critical Third Parties aims to address the systemic risk posed by UK financial services firms’ reliance on the services of certain third-party providers. If a CTP’s services were disrupted, this could have a significant impact on firms and financial infrastructures, threatening stability, resilience and confidence in the UK financial system.

We are currently reviewing responses to the CTP Discussion Paper. We aim to consult on potential rules and guidance relating to providers of critical services under the Financial Services and Markets Act later this year.

Safe and secure AI depends on well-functioning digital infrastructure that protects the operational resilience and cyber-security of UK financial services.

Expectations of resilience

The FCA is technology-neutral and pro-innovation. But we are very clear about the effects and outcomes these technologies can have. We expect firms to be fully compliant with the existing framework, including the Senior Managers & Certification Regime (SM&CR) and the Consumer Duty.

Resilience and safety are a key focus for the UK Government AI Safety Summit taking place in November. I share the Government’s views that we need to think more about safety and security when it comes to frontier technology.

The key message is that firms remain responsible for their own operational resilience, including any services that they outsource to third parties. That is not changing, and firms are still required to meet their commitments no matter how they choose to deliver their services.

Consumer risks and AI scams

We know that AI scams are already on the rise. Audio deepfake technology, vishing scam calls, and biometric theft pose real risks to consumers.

Imagine you get a call from your child telling you they’re in some kind of trouble and need money - and it really does sound like them. Would you help? Are you flipping the coin on trust with your loved ones?

Deepfakes on social media can be used to harm individual consumers or to manipulate markets. And it’s not just consumers at risk from AI scams - firms large and small are also at risk from tailored and sophisticated AI-powered cyber-attacks.

Data considerations and questions

On the topic of data, I have spoken previously about model risk management, and the risks that AI exacerbates with sample bias, model drift, and the black box effect.

When considering the role of data in AI, we must also consider the question of ethical data usage.

One of the defining features of AI is its ability to process large volumes of data, and to detect and exploit patterns in those data.

Just because we have the ability to process the data, should we? And when is the hyper-personalisation of models helpful versus hurtful?

Data considerations are of paramount importance to the safe and responsible adoption of AI. Responsible AI depends upon data quality, data management and data governance, as well as data accountability and ownership structures and - of course - data protection.

The role of regulation and governance

It is important to remember that while the FCA is a technology-agnostic regulator, there are already regulations and frameworks in place to facilitate the safe and responsible implementation of AI in financial services.

The SM&CR, in particular, has a direct and important relevance to governance arrangements and creates a system that holds senior managers ultimately accountable for the activities of their firm, and the products and services that they deliver – including their use of technology and AI, insofar as they impact on regulated activities.

Our approach to consumer protection around AI is based on a combination of the FCA’s Principles for Businesses, other high-level rules, and detailed rules and guidance, including the Consumer Duty.

Having come into effect on 31 July this year, the Consumer Duty requires firms to play a greater and more positive role in delivering good outcomes for retail customers, including those who are not direct clients of the firm.

The Consumer Duty also includes cross-cutting rules pertaining to retail customers, requiring firms to act in good faith, avoid causing foreseeable harm, and enable and support retail customers to pursue their financial objectives.

We believe these frameworks give us the tools we need to work with regulated firms to address the material risks associated with AI. They both provide a context for regulating the technology and create incentives for the right outcomes.

Beneficial regulation leads to beneficial innovation

So what would the responsible adoption of AI look like? What key benefits are we hoping to nurture?

It may enable firms to offer better products, improve operational efficiency, increase revenue and drive innovation. We saw some amazing examples of AI put to use in our global GFIN Greenwashing TechSprint, where participants developed real proofs of concept using image recognition and other AI techniques to identify greenwashing in financial services.

It could also help solve specific challenges, like tackling the advice gap with better, more accurate information delivered to everyday investors.

Large Language Models (LLMs) in particular could help improve customer service or offer opportunities for those consumers currently excluded from the insurance market by creating more tailored offerings.

These are just a few examples of how AI in financial markets could deliver tangible and material benefits - but only if the standards on safety, resilience, data and infrastructure are met.

Examples of beneficial AI in practice

So how is the FCA using AI already?

AI-based models can help us tackle fraud and identify bad actors.

Our Advanced Analytics unit is using AI and Machine Learning (ML) to give us additional tools to protect consumers and markets. We have developed web-scraping and social media monitoring tools that can detect, review and triage potential scam websites, allowing us to monitor proactively.
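
As a purely illustrative sketch - not the FCA’s actual tooling, whose models and features are not public - a triage step might score scraped page text against a list of scam indicators. The phrases and threshold below are hypothetical:

```python
# Hypothetical scam indicators for triaging scraped page text;
# real tooling would use far richer features and trained models.
SCAM_INDICATORS = [
    "guaranteed returns",
    "risk-free investment",
    "act now",
    "limited time offer",
]

def triage_score(page_text: str) -> float:
    """Fraction of indicator phrases found in the page text."""
    text = page_text.lower()
    hits = sum(1 for phrase in SCAM_INDICATORS if phrase in text)
    return hits / len(SCAM_INDICATORS)

def flag_for_review(page_text: str, threshold: float = 0.5) -> bool:
    """Route a page to human review when enough indicators are present."""
    return triage_score(page_text) >= threshold

print(flag_for_review("Act now for guaranteed returns on a risk-free investment!"))
# True - 3 of the 4 indicators matched
```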

The use of big data and advanced analytics, including fuzzy matching and entity resolution, enables us to provide enhanced Management Information (MI) and key risk indicators to our authorisation, supervision and enforcement teams. This includes an in-house synthetic data tool for Sanctions Screening Testing, which uses fuzzy matching to identify sanctioned individuals - a data-driven, proactive approach to testing screening solutions that previously relied on less efficient techniques.
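
To illustrate the fuzzy matching idea - a minimal sketch using Python’s standard library, not the FCA’s in-house tool, and with a made-up watch list - a screening check might look like this:

```python
from difflib import SequenceMatcher

# Hypothetical watch list for illustration; real sanctions screening draws on
# official lists and far richer matching and entity-resolution logic.
WATCH_LIST = ["Ivan Petrov", "Maria Gonzalez"]

def similarity(a: str, b: str) -> float:
    """Normalised string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watch-list entries similar enough to `name` to warrant review,
    catching misspellings and transliteration variants exact matching misses."""
    scores = [(entry, similarity(name, entry)) for entry in WATCH_LIST]
    return [(entry, round(s, 3)) for entry, s in scores if s >= threshold]

print(screen("Iwan Petrov"))  # matches "Ivan Petrov" despite the spelling variant
```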

And we’re not keeping all the data to ourselves. We are collaborating with industry, and have onboarded around 300 public and synthetic datasets as well as 1000-plus Application Programming Interfaces (APIs) onto our Digital Sandbox to support firm innovation.

One example of our use of synthetic data is in our money laundering detection efforts. We take real-world money laundering cases and create synthetic datasets for innovators to use in their AI anti-money laundering (AML) identification tools.
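
As a heavily simplified sketch of what generating such synthetic data could involve - the “structuring” typology, threshold and field names below are all hypothetical, and no real case data is used:

```python
import random

random.seed(7)  # reproducible illustration

# Hypothetical typology: "structuring" - repeated deposits kept just under a
# reporting threshold. Real synthetic datasets are derived from actual case
# patterns; this sketch only illustrates the shape of the output.
REPORTING_THRESHOLD = 10_000

def synthetic_structuring_case(account: str, n_deposits: int = 8) -> list[dict]:
    """Generate synthetic transactions mimicking a structuring pattern."""
    return [
        {
            "account": account,
            "day": day,
            "amount": round(random.uniform(0.85, 0.99) * REPORTING_THRESHOLD, 2),
        }
        for day in range(1, n_deposits + 1)
    ]

# An AML detection tool could then be tested against data like this:
for txn in synthetic_structuring_case("ACC-001", n_deposits=3):
    print(txn)
```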

Ensuring we win the digital coin toss

Across all of these areas, collaboration, both domestic and international, will be increasingly important to encourage the safe and responsible adoption of AI in UK financial markets. This is particularly true as many UK firms operate internationally.

I am encouraged by what we have heard from firms in response to our AI Discussion Paper.

What we are hearing is that firms welcome the legal certainty that a principles-based, technology-agnostic and outcomes-driven regulatory framework brings. The Feedback Statement on our AI Discussion Paper, which will highlight these insights, will be published this month.

We know AI has the potential to transform financial services. It offers benefits ranging from enhanced customer experiences and better consumer outcomes to faster fraud detection and a financial landscape that adapts and evolves faster than ever before.

But we also know that this must be balanced against the potential risks. Responsible adoption is paramount, and firms need to adhere to the existing regulations in order for beneficial innovations to materialise.

It falls on all of us, as innovators, leaders and regulators of our financial systems, to ensure that we act as stewards to shape the role of AI in the financial services industry.

It may be that the proverbial coin that is AI is still in the air, with the outcome hanging in the balance. We can, however, determine the fate of this digital coin toss through collaboration, our commitment to beneficial innovation and ensuring the right guardrails are in place.