PFS welcomes Treasury Select Committee’s report on AI in financial services

Dr Matthew Connell, Director of Policy and Public Affairs at the PFS, welcomes the Treasury Select Committee report, which highlights key risks associated with the use of AI in financial services.

AI-specific regulation

The risks presented by AI can be addressed through good governance and processes, without the need for AI-specific regulation. The measures professionals should take are captured in the Government's five principles for AI regulation: safety, security & robustness; transparency & explainability; fairness; accountability & governance; and contestability & redress.

At the AI and vulnerable customers roundtable that the CII convened on 24 September last year, the FCA set out its position on AI in financial services: a principles-based approach aligned to the Government's principles. The FCA has also adopted a 'tech-positive' stance aimed at encouraging safe and responsible innovation.

The FCA also has the power, through its Systems and Controls rules, to ensure that firms across the sector, many of which already adhere to the principles, are implementing AI appropriately. These rules require each firm to 'take reasonable care to establish and maintain such systems and controls as are appropriate to its business'.

AI and vulnerable customers

As detailed in the summary report the CII produced from the roundtable, the participants, drawn from regulatory, sector, ethics, technology and lived-experience backgrounds, recognised that AI solutions genuinely offer the potential to identify and support individuals in vulnerable circumstances. They also recognised that simply adopting AI to keep pace, without adequate vulnerability-management data infrastructure, governance frameworks and a supportive culture, is a recipe for failure. The CII's Managing Customer Vulnerability guidance provides practical direction on building these foundational capabilities. Once these foundations are in place, deployment must be guided by clear commitments to fairness, transparency, accountability and inclusion, principles that align with the UK government's framework for responsible AI.

AI-generated advice

We agree that consumers are at serious risk of harm from AI-generated advice that appears to be based on expert analysis. Such advice is not backed by a professional firm authorised by the FCA, so it does not come with vital protections, such as minimum training standards for professionals and access to the Financial Ombudsman Service in the event of a complaint.

The FCA, with support from professional bodies, can address this risk in two ways:

1. Police promotions that advertise AI-generated guidance as advice, and take action when providers breach the regulatory perimeter.

2. Work with professional bodies and firms to explore how to measure the effectiveness of AI-based advice tools, potentially with the help of AI itself. Such measurement could help create tools that are safe for consumers to use, or tools that financial advisers and paraplanners can use to increase productivity.