Building better foundations in AI

Speech by Jessica Rusu, FCA Chief Data, Information and Intelligence Officer, at The Alan Turing Institute’s Framework for Responsible Adoption of Artificial Intelligence in the Financial Services Industry (FAIR) event. 

Speaker: Jessica Rusu, Chief Data, Information and Intelligence Officer 
Event: The Alan Turing Institute: Framework for Responsible Adoption of Artificial Intelligence in the Financial Services Industry (FAIR) event 
Delivered: 24 January 2023 
Note: This is the speech as drafted and may differ from the delivered version 

Highlights 

  • AI Regulation in financial services should be built on a platform of collaboration 
  • The principles of inclusion and diversity of thought are key to building better AI 
  • Innovation will help deliver better regulatory outcomes for AI 

On the nature of regulation

The Oxo Tower is an iconic London landmark, a defining feature of the South Bank.  

Although the Oxo Tower has been here for nearly a century, it is also a shapeshifter - adapting with the times, and even outsmarting the authorities when necessary. 

Originally a power station for Royal Mail, the building was bought by the Liebig Extract of Meat Company – the makers of Oxo stock cubes - in the 1920s. 

The new owners wanted to display illuminated signs on the tower to promote their product. 

This kind of advertising, however, was banned by town planners. 

But the makers of Oxo would not be deterred. Architect Albert Moore came up with an ingenious idea to thwart the town hall bureaucrats: he installed new windows which 'coincidentally' happened to take the shapes of a circle, a cross and a circle, spelling out the word OXO. 

I must admit, that is an inspired workaround. Not only did the design of the windows fit the art deco style of the time, but it also managed to promote the Oxo brand – without falling foul of Council planning restrictions.  

And nearly 100 years after its refurbishment, we are still reminded of the product. It’s a remarkable achievement – both for architecture and for marketing!     

As clever as the architect’s trick was, it holds an important lesson. 

Rules and regulations can have unintended consequences.

Therefore, collaboration and a clear focus on outcomes are key. Regulation should not deter innovation, but rather promote fair competition, protect consumers and support the effective functioning of markets.  

The Alan Turing Institute’s Equality, Diversity & Inclusion Strategy 

The regulation of AI is currently the subject of much debate. Is regulation necessary for the safe, responsible, and ethical use of AI? And if so, how? 

This is an area where the Alan Turing Institute plays a key role, both in the UK and globally.  

By bringing together academia, industry, civil society and regulators, the Alan Turing Institute has created a platform for debate and opportunities for engagement, and has supported vital academic research. 

The Turing Institute’s Equality, Diversity and Inclusion (EDI) Strategy for 2021-2024 acknowledges the unique power of this platform.  

On International Women’s Day in 2021, the Turing Institute published research into gender gaps in AI and data science. It found that only 22% of data and AI professionals in the UK are women, and cited mounting evidence that the under-representation of women and marginalised groups in AI results in a feedback loop, whereby bias gets built into and amplified by machine learning systems. 
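
To make that feedback loop concrete, consider a minimal sketch - a toy example of my own, not the Institute's research - in which a model is trained on data that under-represents one group whose patterns differ from the majority's. The model fits the majority well, and its errors concentrate on the minority:

```python
# A minimal, illustrative sketch (not the Institute's research): train a model
# on data where one group is under-represented and whose patterns differ,
# then compare accuracy across groups. All names and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the true decision boundary differs by group via `shift`.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# The majority group dominates training; the minority group is scarce.
X_maj, y_maj = make_group(n=1000, shift=0.0)
X_min, y_min = make_group(n=50, shift=1.5)

model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.hstack([y_maj, y_min])
)

# Evaluate on fresh samples: the model fits the majority's pattern well,
# so errors concentrate on the under-represented group.
for name, shift in [("majority", 0.0), ("minority", 1.5)]:
    X_test, y_test = make_group(n=500, shift=shift)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Toy as it is, the sketch shows why representative data, and diverse teams asking who is missing from the data, matter at every stage of model development.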

Fortunately, the Alan Turing Institute, through its EDI strategy, is advocating for positive change - for a fairer, more equitable and accessible approach to the design and deployment of technology across the UK economy. 

The role of innovation and collaboration in regulation 

Just as the effective use of technology, such as AI, requires a more rounded perspective, so does effective regulation. The example of the Oxo Tower illustrates that even with regulation in place, good regulatory outcomes require more.   

For regulation to work as intended, it requires engagement with those affected by the rules and standards – in our case, not building architects, but financial services firms. 

At the FCA, we believe innovation plays a key role in delivering better regulatory outcomes. Our TechSprints, Policy Sprints, and Regulatory Sandbox ensure that diverse perspectives and solutions are considered. 

Many will already be familiar with the FCA's Innovation Services. Later, I’ll be sharing an exciting new initiative that allows us to work with firms to help drive engagement whilst mitigating their regulatory risk.

Data and Diversity 

Returning to the topic of Diversity, Equity, and Inclusion, this is an area where the Alan Turing Institute and the FCA have a lot in common. 

The financial services regulators in the UK – the FCA, PRA, and the Bank of England - want to see an industry that values diversity and inclusion. These values need to be built on a healthy culture where it is safe to speak up, an essential characteristic of firms that deliver consumer protection and support market integrity. 

Although there has been progress towards this goal over the last few years, and most firms have publicly committed to change, there’s still a lot of work to do. 

In July 2021, we published a joint Discussion Paper (DP) with the PRA and Bank of England, reviewing the current state of diversity and inclusion in the industry, and setting out areas for potential policy intervention. Our three key goals were:  

  • To provide an overview of the current industry landscape,  
  • To encourage positive industry actions, and 
  • To help develop a supervisory approach 

We also published a multi-firm review on diversity and inclusion last month. 

The review demonstrated that firms – even those with the best data – are not making full use of those insights to inform their diversity and inclusion strategies. Measuring the outcomes of those strategies requires better data, something that is as true for diversity and inclusion as it is for AI. 

AI and Machine Learning survey 

A shared interest in AI is another thing that unites the Alan Turing Institute and the FCA.  

I consider myself optimistic about the use of AI in financial services. As I’ve discussed before, AI has the potential to enable firms to offer better products and services to consumers, improve operational efficiency, increase revenue, and drive innovation. All of these could lead to better outcomes for consumers, firms, financial markets, and the wider economy. 

We recently published a survey with the Bank of England to better understand how AI and ML are currently being used, and to gain deeper insight into their adoption in financial services. The findings show broad agreement on the potential benefits of AI, with firms reporting enhanced data and analytics capabilities, operational efficiency, and better detection of fraud and money laundering as key positives. 

The survey also found that the use of AI in financial services is accelerating - 72% of respondent firms reported actively using or developing AI applications, with the number of applications expected to triple over the next three years. Firms also reported that AI applications are now more advanced and embedded in day-to-day operations, with nearly 8 out of 10 applications in the later stages of development. 

While the use of AI can bring a range of practical benefits, it can also pose novel challenges for firms and regulators. Its use can amplify existing risks to consumers, the safety and soundness of firms, market integrity, and financial stability. These risks were reflected in our survey, with data bias and data representativeness identified as the biggest risks to consumers, while a lack of AI explainability was considered the key risk for firms themselves. 

AI Regulation on balance 

Considering the need for this balanced perspective on AI - its benefits, risks and opportunities - as well as increasing industry adoption, one of the most significant questions in financial services is whether AI in UK financial markets can be managed through clarifications of the existing regulatory framework, or whether a new approach is needed. 

This is the topic of a joint Discussion Paper we recently published with the Bank of England, which looked at this question as part of a much broader debate on how to regulate AI to ensure it delivers in the best interests of consumers, firms, and markets. This is a conversation taking place domestically as well as globally, and to which today’s event makes an important contribution.  

If you’d like to add to that conversation, the consultation for the AI Discussion Paper closes on 10th February, and I would encourage you all to respond and share your thoughts with us on this vital debate. 

The role of governance 

As we talk about regulatory frameworks, we must also talk about AI governance and risk management.  

Effective governance and risk management are essential across the AI lifecycle, putting in place the rules, controls, and policies for a firm’s use of AI.   

Good governance is complemented by a healthy organisational culture, which helps cultivate an ethical and responsible environment at all stages of the AI lifecycle: from idea, to design, to testing and deployment, and to continuous evaluation of the model.  

When it comes to developing AI models, it matters who is ‘in the room’.   

At the FCA, we consider that the Senior Managers and Certification Regime (SM&CR) gives us the right framework to respond quickly to innovations, including AI.  

It creates an incentive to collect data to better measure the impact of the technology, including across different demographics. This is important because there is a strong link between senior managers’ objectives and diversity and inclusion outcomes.

These are just some of the practical challenges associated with using AI. To help tackle them, the FCA and the Alan Turing Institute have launched a secondment scheme to collaborate on common challenges arising from the use of AI in financial markets. I very much look forward to this close relationship and to seeing the results of the scheme. 

The FCA Digital Sandbox 

Continuing the topic of innovation in regulation: at the FCA we have the Digital Sandbox, an environment where new technology propositions can be tested, with access to a suite of tools to collaborate and develop proofs of concept, as well as access to high-quality synthetic data.  

The core features of the Digital Sandbox include:  

  • An integrated development environment 
  • Access to synthetic and publicly available data assets 
  • A collaboration platform to support networking and ecosystem development 

There is a strong link here with the Alan Turing Institute’s FAIR programme, which recognises that a major AI research challenge is resolving the tensions and trade-offs which arise when translating high-level principles into practical outcomes. 

The Digital Sandbox helps bridge this gap between theory and execution.  

Synthetic Data’s potential 

The development of the Digital Sandbox as a regulatory tool brings me to my final point – the potential role of synthetic data in the way we test, design, develop and regulate. 

As the Turing Institute’s FAIR programme recognises: 'Synthetic Data Generators (SDGs) [can] enable researchers to work with data in safe environments and to share and link data in settings when, currently, this is not possible due to regulatory or privacy constraints'.  
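
To illustrate the idea in its simplest form, here is a toy sketch of a synthetic data generator - purely illustrative, with invented figures, and without the formal privacy guarantees a production SDG would need: fit a simple statistical model to sensitive records, then sample artificial records that preserve aggregate structure without reproducing any individual row.

```python
# A toy sketch of a Synthetic Data Generator: estimate a simple statistical
# model from sensitive records, then sample artificial records. All figures
# are invented; real SDGs use far richer models and add formal privacy
# guarantees (e.g. differential privacy) that this sketch does not provide.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a sensitive dataset: two numeric columns, e.g. income, balance.
real = rng.multivariate_normal(
    mean=[35_000, 5_000],
    cov=[[4e7, 1e7], [1e7, 9e6]],
    size=1_000,
)

# "Fit": estimate the mean vector and covariance matrix from the real data.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# "Generate": draw synthetic records from the fitted distribution. Aggregate
# structure (means, correlations) is preserved, but no row copies a real one.
synthetic = rng.multivariate_normal(mean=mu, cov=sigma, size=1_000)

print("real means:     ", np.round(real.mean(axis=0)))
print("synthetic means:", np.round(synthetic.mean(axis=0)))
```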

Synthetic data is an increasingly important area of research for the FCA. For the past five years, we have been exploring the potential for synthetic data to expand data-sharing opportunities in a privacy-compliant manner. We have done this through our TechSprints and the Digital Sandbox, as well as through our external engagement with industry and academia. We are also currently an observer of the UK-US PETs (Privacy-Enhancing Technologies) challenge. 

We also know, from a Call for Input on Synthetic Data, which we published last year, that the challenges of accessing and sharing data are significant – particularly for smaller firms. 

We are, however, starting to take steps to address this challenge. I am very pleased to announce that, following our Call for Input on the use of synthetic data last February, we will shortly be publishing the Feedback Statement.  

This will outline the key responses we received, including the potential for synthetic data to combat fraud and money laundering, and provide our thoughts on the potential role of a regulator in this space, including the need for guidance to build trust and promote responsible adoption of this technology. 

Within the next few weeks, we’ll also be publishing a Call for Interest in a Synthetic Data Expert Group (SDEG). This expert panel is modelled on the success of the AI Public-Private Forum (AIPPF). It will be hosted and chaired by the FCA and will run for up to 24 months. It will comprise experts from industry, regulators and academia working on synthetic data. Its purpose will be to: 

  • Clarify key issues in the theory and practice of synthetic data in UK financial markets 
  • Identify best practice as relevant to UK financial services 
  • Create an established and effective framework for collaboration across industry, regulators, academia and wider civil society on issues related to synthetic data, and  
  • Act as a sounding board on FCA projects involving synthetic data 

The FCA is innovating for better AI outcomes 

By investing in the Digital Sandbox, developing our regulatory framework for AI and building our synthetic data expertise, we are taking steps to enhance digital markets and promote beneficial innovation in the interests of consumers and markets across UK financial services.  

We are grateful for our collaboration with the Alan Turing Institute, with whom we share an interest in technology as well as in diversity and inclusion. 

The challenges of designing and deploying beneficial AI are many. But the frameworks and tools are starting to emerge. Through collaborative exploration we develop shared understandings of what matters and how best to achieve the right outcomes for markets and consumers.  

Perhaps there are lessons in this for all of us – firms, academics, technologists and regulators – even the urban planners of tomorrow: equitable outcomes require the inclusion of all. 

Fortunately for us, in the case of the Oxo Tower, the rules led to the creation of an aesthetically attractive, if somewhat quirky, architectural landmark. And whilst aesthetics are happily not a concern in financial services regulation, regulation does need to work for consumers and markets - and whether we are building towers or AI, it’s key that they rest on solid foundations.