We review how AI may reshape retail financial services for consumers, firms, markets and regulators by 2030 and beyond, and seek your views across 4 interrelated themes.
The Board of the FCA has asked me to conduct a review into how advances in artificial intelligence (AI) could transform retail financial services.
This Review builds on the strong foundations the FCA has laid through its AI Discussion Paper, AI Sprint, and AI Lab. The FCA has also outlined its approach to AI, confirming it does not plan to introduce extra regulations. Instead, it will rely on its existing frameworks, which are principles-based, outcomes-focused and already mitigate many of the risks associated with AI.
It has combined this approach with proactive industry engagement and support, developing its groundbreaking Supercharged Sandbox supported by NVIDIA and launching a new AI Live Testing service for firms – a practical, hands-on way to build trust, reduce risk, and accelerate safe and responsible innovation. Together, this range of policy, technical and supervisory approaches support the financial services industry to responsibly adopt AI.
With a fast-moving technology like AI, I fully support encouraging continued innovation and adoption. And as the FCA’s leader responsible for delivering the Consumer Duty and its competition obligations, I see AI as a major opportunity to improve outcomes on both fronts. AI should also enable the UK to enhance its global standing, attract inward investment, and support growth and competitiveness. Understanding how FCA regulation can continue to support these opportunities is therefore vital.
Of course, we also need to ensure that AI adoption and innovation occur while markets continue to work well and consumers remain protected. AI adoption in financial services also introduces growing risks, including sophisticated AI-enabled fraud and identity abuse, algorithmic bias, and opaque decision-making. It could also potentially reduce consumer agency and introduce new forms of market concentration or systemic vulnerability. Over the longer term, increasingly autonomous and interconnected AI systems may amplify existing risks and create new ones.
As I begin this Review, the technological landscape is evolving at a remarkable pace. It is entirely plausible that we will see widespread use of agentic AI systems, neuromorphic computing and quantum capability. We may even encounter forms of non-human intelligence surpassing human reasoning. These technology changes will take place in the context of the growth of digital finance, including blockchain and smart contracts, tokenisation and digital assets.
None of this is science fiction; all of it sits squarely within the range of what could happen in the next decade. At the same time, change may prove more incremental. Human behaviour and institutional inertia can slow adoption, and expectations of transformation often exceed practical reality. But these developments could bring widespread change to financial services, so assessing the potential impact is critical.
My purpose is not to claim certainty about the pace or direction of AI development. I want to explore a range of plausible futures and offer clear recommendations to ensure the FCA remains prepared, adaptive and able to support a thriving, innovative UK financial services sector.
I will consider retail financial services, consumer outcomes, consumer protection, and UK financial services competitiveness and growth. Wholesale markets and broader societal impacts, such as employment effects, are out of scope, although these areas will be considered where relevant to retail markets. For example, widespread availability of AI investment tools could increase retail participation in capital markets.
Through this Engagement Paper, I pose questions on our 4 interrelated themes.
How AI could evolve in the future, including the development of more powerful, autonomous and agentic systems, assessing the whole AI value chain.
How these developments could affect markets and firms, including changes to competition and market structure and UK competitiveness.
The impact on consumers, including how AI could improve outcomes, create new risks, change behaviours, and alter demand and provision of financial services.
How financial regulators may need to evolve to continue ensuring that retail financial markets work well.
I am seeking views from a wide range of stakeholders, including financial firms, consumer groups, trade associations, technology providers, politicians and academics. I and my project team of economists, technologists, supervisors, policy and consumer experts will engage widely, drawing on academic research, international developments and your insights.
Please respond by Tuesday 24 February 2026. You can do this using our online response form. Any other contributions can be sent to us at [email protected].
I will report to the FCA Board in the summer, setting out recommendations to help the FCA continue to play a leading role in shaping AI-enabled financial services. This will culminate in an external publication to support informed debate.
It has been a privilege to serve the FCA across competition, supervision and policy over the past seven years. I hope that this open invitation to contribute will help shape the UK’s next chapter of leadership in financial services.
AI is not new to financial services. It has been a key feature for the past decade or more – including algorithmic trading, underwriting, credit decisioning, fraud detection and chatbots. But the launch of publicly available generative AI models has brought it to the forefront of public consciousness, and we have seen rapid adoption. Millions of consumers use them to navigate their financial lives and more than 75% of UK financial services firms are now using AI.
We may be approaching a genuine inflection point in how AI technology interacts with financial services. Advanced, multimodal and agentic AI systems could reshape market dynamics, alter how financial products are designed and distributed, and transform how consumers engage with firms. In some scenarios, there could be rails to enable machine-readable, programmable forms of digital assets (or money) to be exchanged and settled in real-time, with AI potentially making decisions autonomously.
The impact of AI on retail financial services is still at an early stage, but it is moving rapidly. The extent of its adoption will depend on the confidence of both consumers and firms that these technologies can deliver explainability, fairness, resilience and accountability.
From a consumer’s perspective, we see increasing numbers relying on AI to take material decisions on their behalf, mediate their interactions with financial markets and, ultimately, automate their financial lives. Right now, AI is mostly used as an assistive tool to explain concepts and options, though some consumers already use it as an advisory system that recommends actions. As consumer trust increases, we can see consumers delegating decisions to autonomous agents that act on their behalf within agreed limits. This shift could be gradual, with each stage increasing the level of delegation handed to their AI proxies.
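The escalating delegation described above, from assistive tools through advisory systems to autonomous agents acting "within agreed limits", can be sketched conceptually. This is a minimal illustration, not any firm's actual design: the tier names, the `Mandate` structure and the spending limit are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class DelegationTier(Enum):
    """Illustrative stages of consumer delegation to an AI tool."""
    ASSISTIVE = 1   # explains concepts and options; consumer decides
    ADVISORY = 2    # recommends actions; consumer approves each one
    AUTONOMOUS = 3  # acts on the consumer's behalf within agreed limits


@dataclass
class Mandate:
    """A hypothetical consumer mandate constraining an AI agent."""
    tier: DelegationTier
    monthly_spend_limit: float  # e.g. a switching/payment ceiling in GBP


def requires_consumer_approval(mandate: Mandate, proposed_spend: float) -> bool:
    """Only an autonomous mandate lets the agent act alone, and only
    inside its agreed limit; anything else escalates to the consumer."""
    if mandate.tier is not DelegationTier.AUTONOMOUS:
        return True
    return proposed_spend > mandate.monthly_spend_limit
```

For example, under this sketch an autonomous mandate with a £500 monthly limit would still refer a £750 action back to the consumer, preserving a degree of consumer agency at each stage of delegation.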
For payments, investments, lending and insurance, AI may act as a personal intermediary or 'proxy', driving new value propositions and customer opportunities. Over time, richer consumer data – potentially supported by Open Finance – could support far more detailed virtual models ('digital twins') of individuals, or even organisations, allowing firms to test and improve outcomes in a controlled way. It could also empower AI agents further, allowing them to not merely 'do things for me' but 'act as me'.
When implemented well, AI can enable firms to innovate at pace and better meet customer needs. It can enhance good outcomes by improving personalisation, strengthening customer understanding and support, and driving better-quality services at lower cost. Agentic AI in particular could support people to automatically optimise their household finances, reducing inertia through automatic switching and potentially encouraging a saving and retail investment culture, which could help us respond to demographic changes in the UK.
But AI can also amplify risks: bias, discrimination, exclusion, opaque decision-making (particularly when multiple AI models interact), misleading or hallucinatory advice and erosion of consumer trust. It also significantly expands the potential for cyber-enabled threats, such as model manipulation, and fraud and financial crime risks, such as the use of deepfake technologies and synthetic identities to exploit onboarding or decision-making processes. It could also introduce new risks if decisions are increasingly delegated to AI agents, including reduced consumer agency, reduced consumer understanding, unconscious manipulation and a further decline in financial literacy. For these reasons, consumer trends will be a central theme in this Review.
AI could enhance the value and provision of financial products and services in novel ways, through greater efficiency, speed, scale and hyper-personalisation. At the same time, it may disintermediate some business models and blur others. This could transform retail financial market dynamics in ways that are not yet known.
For example, large incumbents may benefit from scale, data and resources, but smaller digitally native firms may be more agile in harnessing AI, seizing this opportunity to scale, although they could be challenged in turn by new 'AI native' entrants.
AI could also cause market power to shift from financial services firms towards those AI firms who control consumer interfaces, own consumer data, and design AI agents. This could move value chains beyond our regulatory perimeter – or they may enter financial services directly. Drawing on the FCA’s work on the potential competition impacts of Big Tech entry, and the Competition and Markets Authority’s (CMA) Assessment of AI foundation models, we will look at how AI could strengthen the advantages of firms with large amounts of data or control over key digital platforms, and what this might mean for competition in financial services.
Returning to the concept of an emerging 'proxy world' of agentic AI, if this develops, competition could shift significantly. Some firms may thrive in a world where AI agents, rather than people, compare products and make choices on consumers’ behalf. Others may struggle to adapt, as firms increasingly need to appeal to AI agents programmed to optimise for price, value, risk or fit rather than brand loyalty or marketing.
These changes could lower barriers to entry in some parts of the market, while reinforcing economies of scale in others. Whether this leads to more dynamic competition in the interest of consumers, such as lower prices and better-value services, will depend on how the technology evolves, how firms choose to adopt it, and the regulatory and market design decisions taken in the coming years.
Alongside these opportunities, we should expect new and evolving risks. The use of AI may enable more sophisticated forms of financial crime, fraud and manipulation. Bad actors will exploit the same technological advances that support innovation. Firms and regulators alike will face new challenges in detecting, preventing and mitigating harms.
This is just one reason why the Review will consider how supervisory approaches and regulatory technology capabilities may need to evolve, and how AI can assist regulators just as it assists firms. Supervising and supporting AI-enabled firms may increasingly require greater use of AI by the FCA itself, deeper technical expertise, new approaches to testing, assurance and monitoring, and a greater shift to a data- and tech-enabled regulatory approach.
While we do not seek to generate new prescriptive rules, it is vital that the FCA anticipates how AI will change the markets it regulates. Many challenges – data use, transparency, discrimination, access, exclusion, pricing and service quality – are not new. But AI changes their scale, speed and interdependencies. Shifts toward AI-driven decision-making, new models of outsourcing, international inconsistency in standards, coordination problems between agents, and evolving liability expectations all demand thoughtful, forward-looking analysis.
We will also consider the future regulatory approach, including whether existing frameworks – such as the Consumer Duty, SM&CR, Operational Resilience and the nascent Critical Third Parties regime – remain sufficiently flexible and outcomes-focused for an AI-enabled future, and how quickly an articulation of how they apply in an AI-enabled world might be desired. We will explore the trade-offs between regulatory certainty, the speed and pace of change, and innovation, and the levers available to the FCA to support and encourage innovative and safe adoption of AI by firms to realise the opportunity.
For example, we will examine questions of operational resilience, outsourcing, and reliance on hyperscalers, through the lens of our Critical Third Party Regime and the chains of accountability between firms, senior managers, technology providers and model developers. The FCA is not responsible for wider oversight of AI or model providers. But given that AI will be increasingly used by both firms and consumers, including via providers that might not fall under the FCA’s jurisdiction, it will be important for the FCA to collaborate and coordinate with other regulators – notably the CMA, the Information Commissioner's Office (ICO) and the Digital Regulation Cooperation Forum (DRCF) domestically, and counterparts internationally – to continue to meet its objectives while recognising the limits of its remit.
This section sets out the 4 themes on which this Review will be asking questions, and the areas where we would appreciate your input.
Where possible, please provide evidence to support your responses and detail on how your response applies.
You do not have to answer all the questions – please feel free to focus on those relevant to the information you have or the points that you would like to make. It may be helpful if you read all the questions first.
You can give us your views using our online response form. You can also email us at [email protected]. Please respond by Tuesday 24 February 2026.
From 2030, the AI landscape will plausibly be defined by systems that are more autonomous, adaptive, and interconnected than ever before. This would bring a shift from isolated actions to integrated ecosystems, where intelligence operates across technology and infrastructure, and participants and their roles adjust accordingly.
We are moving towards agentic AI that may become capable of independent decision-making and continuous learning, and that is increasingly designed to act on behalf of the consumer. The continuing rise of multimodal AI will enable systems to integrate text, speech, images, and other data, creating richer and more nuanced consumer journeys.
Advances in model and system architectures, along with developments in hardware – with quantum computing somewhere on the horizon – are likely to push the boundaries of speed and efficiency. The emergence of AI assurance platforms – tools that monitor, audit, and evaluate AI systems – is expected to become critical for trust and resilience.
The supply chain for AI – including data, platforms, and infrastructure – will continue to evolve, with hyperscalers playing a foundational role, and specialist providers driving innovation and disruption. As these technologies intersect with complementary developments such as distributed ledger technologies (DLTs), blockchain based ecosystems, smart contract enabled automation, tokenisation, digital identity, and the broader shift toward Open Finance and decentralised finance (DeFi), we expect to see new opportunities alongside emerging threats.
The coexistence of both open and closed capabilities will further shape the trajectory of this landscape, ultimately influencing customer choice, data portability, and access to emerging financial services. This tension will determine whether markets evolve toward more inclusive, competitive, and consumer-empowering models, or whether value concentrates among a small group of dominant platforms with asymmetric control over data, algorithms, and distribution.
To help us consider the path ahead, we invite your reflections on the future of AI technologies, their interplay with the broader ecosystem, and how you think your organisation will exploit and respond to these changes. We also ask for your wider views on the UK’s competitiveness in AI.
As AI technologies develop and are increasingly embedded into firms, they promise to reshape business models, market structure and the competitive landscape. AI could affect the roles of established, emerging and perhaps entirely novel players, with the potential for value to shift to those who control AI-enabled interfaces.
AI is already introducing material efficiencies across core activities. We foresee continued transformation. On the supply side, in payments, AI agents could increasingly initiate, route and optimise transactions on consumers’ behalf. In retail banking and consumer investments, AI-native operating models may dramatically lower servicing costs and enable hyper-personalised propositions. In insurance, continuous risk assessment and automated claims handling could reshape pricing and underwriting. In lending, AI-driven credit decisioning could alter who can access credit and on what terms. In addition, cost reductions may or may not be passed on as lower prices for consumers.
On the demand side, AI could transform how consumers engage with financial services. AI agents may increasingly construct, rebalance and execute investment portfolios on behalf of consumers. This could compress margins on traditional advice while raising questions about suitability, transparency and potential market herding if many consumers use similar AI systems. For banking, credit and insurance products, AI agents could compare, recommend and switch products, reducing customer inertia as well as challenging the business models of various financial intermediaries.
If consumers increasingly delegate financial decisions to AI agents, firms will potentially have to compete in novel ways for the attention of AI systems. This could make markets more competitive, if AI agents require firms to develop novel value propositions. It could, however, create new forms of market power, if AI providers favour certain firms, or if personalisation creates lock-in. These AI-enabled services could also capture significant value from financial firms while remaining outside the regulatory perimeter.
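To make concrete how agent-mediated choice might differ from brand-driven choice, the following is a minimal sketch of a purely attribute-based ranking. All names, scores and weights are hypothetical; the point is simply that an agent optimising for price, value, risk and fit has no input for brand loyalty or marketing.

```python
from dataclasses import dataclass


@dataclass
class Product:
    provider: str
    annual_cost: float   # lower is better
    value_score: float   # 0-1, higher is better (coverage, features)
    risk_score: float    # 0-1, lower is better
    fit_score: float     # 0-1, match to the consumer's stated needs


def agent_rank(products: list[Product],
               w_cost: float = 0.4, w_value: float = 0.3,
               w_risk: float = 0.1, w_fit: float = 0.2) -> list[Product]:
    """Rank products purely on price, value, risk and fit.

    Note the deliberate absence of any brand or marketing input: an
    agent scoring this way is indifferent to incumbency or advertising.
    """
    max_cost = max(p.annual_cost for p in products) or 1.0

    def score(p: Product) -> float:
        return (w_cost * (1 - p.annual_cost / max_cost)
                + w_value * p.value_score
                + w_risk * (1 - p.risk_score)
                + w_fit * p.fit_score)

    return sorted(products, key=score, reverse=True)
```

Under such a scheme, a cheaper, better-fitting product from an unknown challenger would outrank a costlier incumbent product, however strong the incumbent's brand, which is one way the competitive dynamics described above could shift.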
All these factors will require adaptation from financial firms to stay relevant, and it is unclear whether this will favour large incumbents, smaller challengers, or see the rise of new players, such as specialist AI intermediaries, or 'AI native' firms with fundamentally different cost structures and business models.
Across all these sectors we see common underlying forces at work: potential data feedback loops that could entrench market leaders; economies of scale in AI that could raise barriers to entry; network effects that could create 'winner takes most' dynamics; and shifting switching costs that could either intensify or dampen competition.
Whether this leads to more dynamic competition in the interest of consumers, such as lower prices and better-value services, will depend on how the technology evolves, how firms choose to adopt it, and the regulatory and market design decisions taken in the coming years.
We are looking for evidence on how these forces are operating, how they might reshape market structure and the customer relationship, and whether value could migrate outside the regulatory perimeter.
This theme will explore how consumer experiences, expectations and outcomes in retail financial services could evolve as AI becomes embedded in everyday life.
By 2030, consumers could increasingly be interacting with financial services through AI-mediated interfaces rather than directly with firms. In this scenario, personal financial agents may manage day-to-day money flows, optimise borrowing, insurance and investment choices in real time, and coordinate decisions across products and providers. In this environment, consumer expectations are likely to shift towards highly personalised, automation-driven financial journeys, with reduced tolerance for friction, poor value or opaque outcomes.
If managed well, this has the potential to improve financial outcomes and tackle the issue of poor financial literacy and capability in a novel way. We know, though, that other harms could emerge, such as reliance on unregulated AI for guidance or advice, vulnerability to mis-selling, and risks created by model bias or hallucination. Indeed, as consumers delegate more decisions to AI-enabled intermediaries, new forms of AI-driven fraud, manipulation and money laundering may emerge, exploiting synthetic identities, deepfake interactions and automated criminal ecosystems. We will consider what 'good outcomes' might mean in a world where consumers increasingly delegate financial decisions and their ability to assess whether their AI agents are acting in their interests may be limited.
Our work will consider research into how consumers use AI tools to navigate their financial lives to build a clearer picture of current and emerging practices. We will explore how attitudes and behaviours differ by digital confidence (high/low), financial resilience (stretched/comfortable), scam exposure (recent/none) and delegation style (prefers automation vs control). Some consumers may prioritise convenience, whilst others may prioritise control or cost. We will also address vulnerability and be mindful of the experiences of people excluded from AI tools, and assess whether there may be a natural ceiling on their adoption.
We welcome views on what these changes may mean for existing consumer policy, consumer protection and the future of the regulatory perimeter.
By 2030, the FCA should remain an outcomes-based regulator – one that enables innovation, manages risks, and protects consumers as AI becomes more embedded across retail financial services. We do not aim – it would be premature – to explicitly recommend major changes to regulation or law.
Existing frameworks such as the Consumer Duty, the Advice Guidance Boundary reforms, the Senior Managers & Certification Regime (SM&CR), Operational Resilience requirements and the nascent Critical Third Parties (CTP) regime all provide a flexible foundation for AI to be implemented in retail financial services. The task is not to rewrite these frameworks, but to consider how the way we apply them may need to adapt as AI changes the pace, scale and nature of markets, firms and consumer experiences. We also note the interest of the Treasury Select Committee in the FCA’s current regulatory approach as part of their recent inquiry.
This will be informed by the scenarios for 2030 that we see in the other themes. For example, we will assess how relevant senior managers under SM&CR can continue to discharge their responsibilities for the deployment and maintenance of AI systems, and how these responsibilities might need to evolve under different future scenarios.
We must also consider how existing consumer protection rules and policy may be shaped by AI in the longer term. For example, our vulnerability guidance forms part of our approach to the Consumer Duty. AI has great promise for people who might struggle with their finances, or who come to take decisions at a time when they are vulnerable or need support. However, there might be new ways in which firms or others could target vulnerable customers using AI. Similarly, the industry is trending towards hyper-personalisation, leveraging AI to boost its competitive advantage. What could this mean for existing regulatory expectations?
Moreover, to remain effective in an AI-enabled future, regulatory approaches will need to support innovation while responding to new anti-money laundering (AML) and consumer protection challenges, including autonomous fraud, AI-powered social engineering and identity compromise. Clear expectations on accountability, auditability and the safe deployment of high-risk AI could become increasingly important.
In undertaking this theme, we can learn from the progress of others. We will consider the emerging approaches of other regulators and the wider emergence of regulation and legal approaches to AI globally, both by financial services (FS) regulators and non-FS regulators. This will be through partners and stakeholders such as the DRCF and UK Regulators Network and international bodies such as International Organization of Securities Commissions (IOSCO), Global Financial Innovation Network (GFIN), Bank of International Settlements (BIS) and others. We will also seek views or consider outputs from a range of AI-focussed bodies such as the AI Security Institute and Responsible AI UK.
We will also consider best practice in other regulatory frameworks designed to respond to other emerging fields of exponential technology, such as CRISPR technology in synthetic biology. This will enable us to strategically complement current work at the FCA on optimising our regulatory approaches for AI, to ensure the FCA’s readiness for this potential transformational change in retail financial services.
More broadly, this theme will therefore examine how the FCA’s regulatory model – authorisation, supervision, policy, and enforcement – may evolve in response. AI will likely accelerate the scale and speed at which risks develop, particularly when multiple autonomous AI systems interact. The FCA will need to ensure its processes and systems can respond at pace. It may need to deploy its own AI agents to act faster, enabled by better data to ensure markets continue to work well. It will need to ensure its own rule making is sufficiently flexible and proactive to adapt to the challenges of fast paced changes and consider how to mitigate any potential information asymmetries between the FCA and regulated parties.
Finally, as firms improve the way they use and present data, regulators will also be able to interpret information more quickly and act sooner. This may change the balance between preventative (ex-ante) and reactive (ex-post) supervision. We will also consider the consequences for enforcement and redress mechanisms, which may also need to handle cases where harms scale rapidly, involve autonomous systems, or require analysis of complex, evolving technical evidence. It is likely that AI will require the FCA to enhance its enforcement toolkit as a result.
We therefore invite examples of how best the FCA can take advantage of the opportunity advanced AI presents to manage risks, as well as how the authority can itself become a world-leading regulator, enhanced by AI and new approaches to data.