AI Regulation in Financial Services: Turning Principles into Practice
Dec 18, 2025

Summary
The FCA’s desire for a balanced approach was reinforced in early December 2025 when Chief Executive Nikhil Rathi reaffirmed that the regulator will not introduce AI-specific rules, citing the technology’s rapid evolution “every three to six months”. Instead, the FCA is doubling down on its principles-based, outcomes-focused approach, encouraging firms to innovate while committing to intervene only in cases of “egregious failures that are not dealt with”. This stance signals a shift toward adaptive oversight and a collaborative regulator–industry relationship, rather than rigid prescription.
So, where does the regulatory perimeter now sit? Where should firms concentrate their efforts when planning for responsible AI adoption? And what is in store for 2026?
The UK’s incremental, not prescriptive, regulatory approach
Despite growing political attention on the safe development of advanced AI – led by the rebranded AI Security Institute (AISI) within the Department for Science, Innovation and Technology – the UK financial regulators have resisted calls for an AI-specific rulebook. In its recent oral evidence before the Treasury Committee, the FCA again confirmed that its approach remains technology-neutral, principles-based, and outcomes-focused, relying on existing frameworks such as the Consumer Duty, the Senior Managers and Certification Regime (SM&CR), and operational resilience rules.
Jessica Rusu, FCA Chief Data, Information and Intelligence Officer, told the Treasury Committee that the regulator does not intend to “introduce prescriptive AI rules” but will embed AI oversight within current conduct and prudential standards, focusing on fairness, transparency, and accountability. Meanwhile, David Geale, FCA Executive Director for Payments and Digital Finance, reinforced that explainability and governance for AI models – particularly where decisions affect consumers or market integrity – remain non-negotiable.
This stance reflects a deliberate choice: supporting innovation without stifling growth, while reserving the option to tighten expectations through guidance rather than statute. It is also in line with the government’s call for regulators to weigh their interventions against its ambition to prioritise growth, a key part of which is a stronger growth duty placed on regulators.
It also positions the UK in contrast to the EU and US. The EU’s AI Act introduces prescriptive obligations for “high-risk” systems, while the US leans toward sectoral guidance, alongside a current push to pre-empt state-level AI legislation and avoid a fragmented (and potentially burdensome) regulatory ecosystem. The UK’s pro-innovation stance, reinforced by AISI’s research agenda, aims to keep regulation agile – although firms should expect incremental tightening, particularly around auditability and consumer protection.
AI moves from pilot to performance – with important caveats
The pace of adoption has accelerated dramatically. Published in November 2024, the Bank of England and FCA’s third survey of AI and machine learning in UK financial services showed 75% of firms already using AI, with a further 10% planning to adopt it within three years. Foundation models – including large language models (LLMs) – accounted for 17% of use cases, though most deployments remain low materiality.
Published in September 2025, Lloyds’ Financial Institutions Sentiment Survey reported that 59% of institutions now see measurable productivity gains from AI, up from 32% a year earlier. Over half plan to increase investment in 2026, and nearly half have established dedicated AI teams.
The evidence considered as part of the recent Treasury Committee inquiry into AI use in financial services reveals several common use cases:
Fraud detection and APP scam prevention
The British Insurance Brokers’ Association (BIBA) uses AI to process large volumes of data at speed, enabling faster and more accurate fraud analysis. Similarly, the Electronic Money Association (EMA) uses AI-driven models to identify unusual transaction patterns that may indicate fraudulent activity, helping differentiate legitimate transactions from suspicious ones in real time.
In the payments space, Mastercard applies its AI capabilities and network-wide view of transactions to support banks in predicting scams and identity theft. These tools enhance early detection and prevention, reinforcing consumer protection and trust in digital payment systems.
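The models behind these tools are proprietary and are not described in the evidence, but the general pattern of scoring transactions against learned patterns of normal behaviour is well established. The sketch below is a minimal, hypothetical illustration of that pattern using an isolation forest; the field names, features and contamination rate are assumptions for illustration only, not a description of any firm’s production system.

```python
# Illustrative only: anomaly scoring of transactions with an isolation forest.
# The field names, features and contamination rate are hypothetical assumptions,
# not a description of any firm's production system.
import pandas as pd
from sklearn.ensemble import IsolationForest

def score_transactions(history: pd.DataFrame, new_batch: pd.DataFrame) -> pd.DataFrame:
    features = ["amount", "merchant_risk", "hour_of_day", "days_since_last_txn"]

    # Train on historical transactions, assumed to be overwhelmingly legitimate.
    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(history[features])

    # Lower decision_function scores indicate more anomalous transactions.
    scored = new_batch.copy()
    scored["anomaly_score"] = model.decision_function(new_batch[features])
    scored["flag_for_review"] = model.predict(new_batch[features]) == -1
    return scored.sort_values("anomaly_score")
```

In a live setting, flagged transactions would typically feed a case-management queue for analyst review rather than being blocked automatically, keeping a human decision point in the loop.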
AML/KYC compliance and transaction monitoring
The FCA has partnered with the Alan Turing Institute on the AML and Synthetic Data Project, designed to enhance money laundering detection through advanced analytics. The initiative uses real anonymised transaction data from high street banks, augmented with AI capabilities to create a fully synthetic dataset that mirrors real-world patterns. This approach allows for the testing and development of more effective detection models without compromising customer privacy, paving the way for scalable, privacy-preserving solutions in AML and KYC compliance.
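The project’s methodology is considerably more sophisticated than can be shown here, and its details are not set out above. Purely as a toy illustration of the synthetic-data idea, the sketch below fits simple per-feature distributions to an anonymised transaction dataset and samples new records from them; all column names and distributional choices are assumptions.

```python
# Toy illustration of privacy-preserving synthetic data: fit simple marginal
# distributions to an anonymised transaction dataset and sample new records.
# Real pipelines use far more sophisticated generative models; the column names
# and distributional choices here are assumptions for illustration only.
import numpy as np
import pandas as pd

def make_synthetic(real: pd.DataFrame, n_rows: int, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)

    # Log-normal fit for positive, heavy-tailed transaction amounts.
    log_amounts = np.log(real["amount"])
    amounts = rng.lognormal(mean=log_amounts.mean(), sigma=log_amounts.std(), size=n_rows)

    # Categorical columns sampled in proportion to their observed frequencies.
    def sample_categorical(col: pd.Series) -> np.ndarray:
        freqs = col.value_counts(normalize=True)
        return rng.choice(freqs.index.to_numpy(), size=n_rows, p=freqs.to_numpy())

    return pd.DataFrame({
        "amount": amounts,
        "channel": sample_categorical(real["channel"]),
        "country": sample_categorical(real["country"]),
    })
```

Sampling each feature independently like this discards the cross-feature correlations that AML models actually rely on, which is why real projects use joint generative models and formal privacy testing; the point of the sketch is only that detection models can be developed and evaluated without exposing real customer records.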
Cybersecurity threat modelling
UK Finance notes that firms are increasingly using AI to detect and respond to potential cyberattacks more efficiently. For example, security analysts can use AI tools to classify suspicious emails and identify phishing attempts, enabling faster incident response and reducing the risk of breaches. By automating threat detection and prioritisation, AI enhances resilience against evolving cyber threats while freeing up human resources for higher-value security tasks.
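As a hedged illustration of the email-triage use case, the sketch below trains a simple text classifier and surfaces messages whose estimated phishing probability exceeds a review threshold. The training data, features and threshold are placeholders; production tooling would combine many more signals, such as message headers, embedded URLs and sender reputation.

```python
# Minimal sketch: flag likely phishing emails for analyst triage.
# The training data and review threshold are hypothetical; real tooling combines
# many more signals, such as headers, embedded URLs and sender reputation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_phishing_classifier(emails: list[str], labels: list[int]):
    """labels: 1 = known phishing, 0 = legitimate."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    model.fit(emails, labels)
    return model

def triage(model, inbox: list[str], threshold: float = 0.7) -> list[tuple[str, float]]:
    # Return messages whose estimated phishing probability exceeds the threshold.
    probs = model.predict_proba(inbox)[:, 1]
    return [(text, p) for text, p in zip(inbox, probs) if p >= threshold]
```

The output is a prioritised queue for a human analyst, which matches the augmentation pattern described throughout the Treasury Committee evidence.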
Customer service automation and chatbots
AI-driven customer service automation has grown exponentially over the past two years. In 2024 alone, NatWest Group’s AI-powered digital assistant, “Cora”, handled more than 11 million customer interactions, demonstrating the scale and efficiency of conversational AI in financial services.
UK Finance’s use of AI in customer engagement goes beyond basic chatbot functionality. Its tools use historical customer data to personalise interactions, identify suitable products and deliver targeted rewards offers. These data-driven insights are improving customer retention and satisfaction.
Back-office optimisation (document generation, marketing content)
Zurich Insurance UK reports its most significant use of AI is in these areas, deploying tools to streamline data extraction, route emails to the correct recipients and flag missing information in correspondence. These applications not only accelerate administrative workflows, but also enhance accuracy and free up resources for higher-value tasks, such as client service and strategic decision-making.
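Zurich’s actual tooling is not described in detail in the evidence. The fragment below is only a rule-based stand-in for the kind of routing and completeness checks referred to, with hypothetical mailbox names, keywords and required fields; in practice these steps are increasingly handled by language models rather than hand-written rules, but the control points are the same.

```python
# Illustrative only: route incoming correspondence and flag missing information.
# Mailbox names, keywords and required fields are hypothetical placeholders,
# standing in for the model-driven extraction described in the evidence.
import re

ROUTES = {
    "claims": ("claim", "loss", "incident"),
    "underwriting": ("quote", "renewal", "proposal"),
    "billing": ("invoice", "payment", "premium"),
}

REQUIRED_FIELDS = {
    "policy number": r"\bpolicy\s*(?:no\.?|number)\b",
    "date of loss": r"\bdate of loss\b",
}

def route_email(body: str) -> str:
    # Send the email to the first mailbox whose keywords appear in the text.
    text = body.lower()
    for mailbox, keywords in ROUTES.items():
        if any(keyword in text for keyword in keywords):
            return mailbox
    return "general-enquiries"

def missing_information(body: str) -> list[str]:
    # Return the required fields that could not be found in the email body.
    return [name for name, pattern in REQUIRED_FIELDS.items()
            if not re.search(pattern, body, flags=re.IGNORECASE)]
```

Whether rules or models do the work, each routing decision and completeness flag should be logged so that it can be audited later.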
Early-stage robo-advisory tools, with human-in-the-loop safeguards
Lloyd’s Market Association employs an augmented underwriting process where human underwriters remain central to decision-making. AI tools assist by triaging submissions, scoring risk, and providing risk-specific insights. This boosts efficiency without removing human judgment from critical decisions.
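The Lloyd’s Market Association has not published its triage logic, so the sketch below only shows the general shape of an augmented, human-in-the-loop workflow: a model scores and sorts submissions, but anything with a mid-range risk score or low model confidence is queued for an underwriter, and nothing is bound by the model alone. The thresholds and fields are hypothetical.

```python
# Sketch of an augmented (human-in-the-loop) underwriting triage flow.
# The scoring model, thresholds and fields are hypothetical assumptions; the
# point is that the model assists prioritisation and never binds risk on its own.
from dataclasses import dataclass

@dataclass
class Submission:
    reference: str
    risk_score: float        # model-estimated probability of loss, 0..1
    model_confidence: float  # model's confidence in its own estimate, 0..1

def triage(submissions: list[Submission],
           decline_above: float = 0.9,
           refer_above: float = 0.4,
           min_confidence: float = 0.7) -> dict[str, list[Submission]]:
    queues: dict[str, list[Submission]] = {
        "underwriter_review": [], "fast_track": [], "decline_recommended": []
    }
    for s in submissions:
        if s.model_confidence < min_confidence or refer_above <= s.risk_score < decline_above:
            queues["underwriter_review"].append(s)    # human judgement required
        elif s.risk_score >= decline_above:
            queues["decline_recommended"].append(s)   # still checked by an underwriter
        else:
            queues["fast_track"].append(s)            # low risk, prioritised for quoting
    return queues
```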
Although the examples above illustrate AI’s growing role in day-to-day operations, its use remains largely confined to non-critical functions – such as marketing, client communications, and internal knowledge management – rather than core banking or trading decisions. However, both the FCA and Bank of England anticipate a shift toward agentic AI in core decision-making, such as credit underwriting, portfolio optimisation, and risk modelling.
This evolution raises important questions about consumer confidence and trust in AI-driven financial services. While recent research suggests consumers are broadly receptive to AI for low-risk applications such as fraud detection, confidence declines sharply for high-stakes decisions like loan approvals. Key concerns include data privacy, algorithmic bias, and loss of human interaction. As a result, consumers are increasingly demanding transparency and meaningful human oversight. For now, the trend is toward augmented, not fully autonomous, AI decision-making.
The regulators’ perspective on emerging risks
Alongside the developing use cases, there are unquestionable risks associated with the use of AI – especially generative and agentic AI. The Treasury Committee’s October session with the FCA and Bank of England highlighted three supervisory priorities:
- Transparency and explainability – Firms must articulate how AI models reach decisions, particularly in lending, insurance, and fraud detection.
- Accountability – The FCA’s position remains that there will be no dedicated Senior Manager Function holder responsible for AI. Instead, responsibility for AI-driven outcomes sits firmly within the existing accountability regime under the SM&CR. Firms must allocate clear oversight of their AI usage within existing SM&CR responsibilities; delegating decisions to algorithms does not dilute liability.
- Systemic risk monitoring – The Bank of England’s Financial Policy Committee is assessing whether widespread AI adoption could amplify market shocks through correlated behaviours or model failures.
MPs pressed regulators on whether current frameworks adequately address bias, concentration risk, and third-party dependencies. The FCA acknowledged these as “live issues” and signalled that guidance on audit trails and human-in-the-loop protocols is likely in 2026.
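No audit-trail guidance has yet been published, so any format is necessarily speculative. As one possible starting point, the sketch below illustrates the kind of per-decision record a firm could begin capturing now, pairing each AI-assisted decision with the model version, the inputs and reason codes behind it, and any human reviewer; all field names are assumptions rather than regulatory requirements.

```python
# Speculative sketch of a per-decision audit record for an AI-assisted decision.
# No FCA format exists yet; the field names and structure are assumptions meant
# to show the kind of evidence (model version, drivers, human sign-off) that
# supports explainability and later review.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class DecisionRecord:
    decision_id: str
    model_name: str
    model_version: str
    inputs: dict                   # the features actually fed to the model
    output: str                    # e.g. "approve", "refer", "decline"
    reason_codes: list[str]        # top drivers behind the output, for explainability
    human_reviewer: Optional[str]  # None only where policy permits full automation
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    # Append-only JSON Lines log, so individual decisions can be reconstructed later.
    with open(path, "a", encoding="utf-8") as handle:
        handle.write(json.dumps(asdict(record)) + "\n")
```

An append-only log of this sort supports both explainability (why a given customer outcome occurred) and accountability (who signed it off), which are the two themes the regulators keep returning to.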
Beyond these specific concerns, the October oral evidence made it clear that regulators are also monitoring a wider range of emerging risks across the financial system. These relate to:
- Bias and fairness – Discriminatory outcomes in credit scoring or insurance pricing remain a key concern.
- Cybersecurity threats – AI can enable more sophisticated attacks, increasing both the scale and pace of incidents and introducing new forms of attack such as advanced phishing and deepfakes. It can also create vulnerabilities in model supply chains.
- Third-party concentration – Heavy reliance on a handful of AI infrastructure providers creates systemic risk. The CMA is considering designating select cloud providers as having strategic market status, following its cloud services market investigation. Similarly, in November 2025 the EU launched market investigations under the Digital Markets Act to assess whether practices in the cloud computing sector limit competitiveness and fairness.
- Model complexity and “hidden” models – Validating AI outputs and preventing hallucinations in generative systems remain ongoing challenges for firms and regulators alike.
- Operational resilience – Firms must ensure AI failures do not cascade into systemic disruption. In 2026, we expect the UK Treasury to designate several critical third parties for enhanced scrutiny, particularly where their services underpin systemically important market functions.
- Market integrity risks – Algorithmic collusion, herding behaviour and misinformation could amplify volatility or trigger flash crashes, while AI-driven coordination may be harder to detect than traditional collusion.
These risks will shape the FCA’s guidance and supervisory focus in 2026.
The regulators as AI users
The FCA and Bank of England are not just observers and supervisors of firms’ AI usage — they are adopters. As we discussed in our recent article AI for Growth – FCA commits to being increasingly tech positive, the FCA is embedding AI into its own operations as part of its 2025–2030 strategy to become a “smarter regulator”.
Current applications include:
- Predictive AI in the Supervision Hub to assist agents with real-time knowledge retrieval
- An AI Voice Bot to triage consumer queries and direct them to the appropriate agency (FCA, FOS, FSCS)
- The experimental use of LLMs to process unstructured data, identify patterns, and streamline authorisations
- Advanced AI analytics, which are being applied to trading data to detect potential misconduct
- AI Lab initiatives, including AI Live Testing, which allows firms to validate models under regulatory oversight
The FCA’s own AI adoption reinforces its expectation that firms embed explainability and governance into their practices. But it also means firms should anticipate more data-driven supervision and faster detection of misconduct.
Where should firms’ strategic focus be now?
With no AI-specific rulebook, firms must navigate adoption through existing obligations – Consumer Duty, SM&CR, operational resilience, and data protection – while addressing emerging risks and the challenges of implementing and embedding AI workflows successfully. The regulatory tone remains principles-first, prescriptions-later, but scrutiny is intensifying. Firms should also assess whether the EU’s operational resilience rules apply to their UK operations where there is sufficient EU nexus, as these requirements may affect the use of AI tools embedded in IT service delivery packages.
The following three strategic priorities will help firms navigate AI responsibly while meeting regulatory expectations:
- Governance and explainability: Boards and senior managers must ensure AI-driven decisions are transparent, auditable, and aligned with Consumer Duty obligations. Accountability under the SM&CR remains non-negotiable.
- Third-party oversight and exit planning: With the UK regulators consistently warning that AI could become a critical dependency, firms should strengthen their vendor due diligence, contractual safeguards and migration strategies to avoid vendor lock-in and operational risk.
- Bias and fairness controls: FCA research published in January 2025 highlights bias risks in language models and credit scoring. Firms should embed bias audits and fairness testing into their model governance, particularly for consumer-facing applications (a simple illustration of such a check follows this list).
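The FCA research does not prescribe particular metrics, so the following is only one common starting point: comparing model approval rates across groups and computing a disparate-impact style ratio. The grouping column, field names and the 0.8 benchmark are illustrative assumptions, not regulatory thresholds.

```python
# Illustrative fairness check: compare model approval rates across groups.
# The grouping column, outcome column and the 0.8 benchmark are assumptions,
# not regulatory thresholds; real bias audits examine many metrics and slices.
import pandas as pd

def approval_rate_audit(decisions: pd.DataFrame,
                        group_col: str = "age_band",
                        outcome_col: str = "approved") -> pd.DataFrame:
    rates = decisions.groupby(group_col)[outcome_col].mean().rename("approval_rate")
    report = rates.to_frame()
    # Disparate-impact style ratio: each group's rate relative to the best-treated group.
    report["ratio_to_highest"] = report["approval_rate"] / report["approval_rate"].max()
    # Flag groups falling below a commonly cited (but non-statutory) 0.8 benchmark.
    report["below_benchmark"] = report["ratio_to_highest"] < 0.8
    return report.sort_values("ratio_to_highest")
```

A result like this would feed a model-governance review rather than trigger automatic remediation; interpreting any gap still requires human judgement about legitimate risk factors and data quality.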
What practical steps can firms take to manage day-to-day risk?
To meet regulatory expectations and manage AI responsibly, firms must translate the following obligations and risk areas into concrete controls across their operations:
- Consumer Duty: Make sure AI-driven products deliver fair value and avoid foreseeable harm.
- SM&CR: Assign clear accountability for AI governance.
- Operational resilience: Stress-test AI dependencies and maintain contingency plans.
- Data management: Validate data lineage, quality, and privacy compliance.
- Third-party dependencies: Prepare for future designation under the Critical Third Parties (CTP) regime; expect mapping, scenario testing, and incident response playbooks.
- Exit management risks: Assess portability and replicability early to avoid vendor lock-in.
- Bias and fairness: Implement bias audits and fairness testing throughout the model lifecycle.
- Consumer inclusion: Monitor vulnerability indicators and mitigate exclusion risks.
To further strengthen their position, firms should also:
- Establish AI risk committees and embed bias audits into the model lifecycle.
- Document human-in-the-loop controls for high-impact decisions.
- Engage with the FCA’s AI Lab to test models and anticipate supervisory expectations.
- Prepare for future guidance on audit trails and explainability, expected by the end of 2026.
By embedding governance, resilience, and fairness into their AI strategies now, firms will not only satisfy regulators, but also build competitive advantage in an increasingly AI-driven market.