FCA AI Sprint - Trust, Regulatory Clarity, and Collaboration

Apr 28, 2025

In January, the FCA held its inaugural AI Sprint, bringing together 115 participants from the financial services industry, academia, regulators, technology providers, and consumer representatives for two days to discuss the opportunities and challenges of AI in financial services.

The FCA has now published a summary of the discussion, highlighting four common themes:

  1. Trust and risk awareness: I have promoted this to the first theme, as it is the most critical and cross-cutting.
  2. The importance of regulatory clarity: The onus here is on the FCA to clarify and build out existing requirements.
  3. Collaboration and coordination domestically and internationally: There is no shortcut; multiple stakeholders need to work together.
  4. Safe AI innovation through sandboxing: The value of regulatory sandboxes is well-established. The FCA is now focusing on launching its “Supercharged Sandbox”, offering innovators greater computing power, infrastructure, datasets, and mentorship to support the testing and validation of AI solutions. Sounds exciting!

So, what else is in the pipeline, alongside the Supercharged Sandbox? Unsurprisingly, the actions pick up on the themes above, with the FCA planning to focus on:

  • International collaboration, to explore cross-cutting considerations and influence the development of international standards.
  • Areas of uncertainty, where regulation could be restricting safe and responsible AI adoption. There is no commitment to address this if (when) identified, but I hope that this will be the natural next step.
  • Communications and stakeholder engagement, through the AI Lab and AI Spotlight. Data protection and privacy were identified as an area of potential uncertainty; next week, on 9 May, the FCA is holding a joint roundtable with the Information Commissioner.

The eagle-eyed will notice that these next steps do not cover trust - but the FCA has not side-stepped this issue. The FCA has published a separate blog post by Colin Payne, FCA Head of Innovation Services. Trust and trustworthiness in AI are arguably the holy grail and are pervasive when it comes to the future of AI. Payne captures this nicely.

"First, trust isn’t just a buzzword. It’s the whole deal when it comes to embracing AI. And trust isn’t just about protection, it’s also about growth. I’m not saying to completely trust what the computer says – this is still very new technology. But this is about demonstrating that we as a sector can embrace AI safely and responsibly to address those challenges and deliver on massive potential. With that trust, firms can get senior buy-in to experiment, and consumers will engage with AI-driven services. Trust in AI is like trying to build a house from the roof down – you need solid foundations first. Neither the FCA nor firms can establish this trust alone, but together we can create the environment needed for AI to flourish."

Colin Payne, FCA Head of Innovation Services

Meet The Team

Samantha Paul
+44 (0) 20 3400 3194

This material is not comprehensive, is for informational purposes only, and is not legal advice. Your use or receipt of this material does not create an attorney-client relationship between us. If you require legal advice, you should consult an attorney regarding your particular circumstances. The choice of a lawyer is an important decision and should not be based solely upon advertisements. This material may be “Attorney Advertising” under the ethics and professional rules of certain jurisdictions. For advertising purposes, St. Louis, Missouri, is designated BCLP’s principal office and Kathrine Dixon (kathrine.dixon@bclplaw.com) as the responsible attorney.