FCA announces intention to regulate AI services

FCA, BoE, and PRA will regulate critical third parties including AI services (Photo: Tara Winstead/Pexels)

The Financial Conduct Authority has announced its intention to regulate critical third parties, including AI services, for the UK financial sector.

In a speech at the Economist Impact Finance Transformed event, FCA chief executive, Nikhil Rathi, stated that he wanted to "mitigate" the potential systemic impact that could be triggered by a critical third party.

He detailed that, with so many financial services using critical third parties, "we must be clear where responsibility lies".


However, Rathi stressed that any regulation in this area must be proportionate enough to foster beneficial innovation but robust enough to avoid a loss of trust and confidence.

He detailed that one way to achieve this is to work with the FCA through its upcoming AI sandbox.

He explained that, while the FCA does not regulate technology itself, it does regulate the use and effects of technology in financial services, and it is already seeing AI-based business models coming through its authorisations gateways.

With these developments, Rathi stated it was “critical” the FCA does not lose sight of its duty to protect the most vulnerable and to safeguard financial inclusion and access. 

“Our outcomes-based approach not only serves to protect but also to encourage beneficial innovation,” he said.

What can AI do for me?

In the speech, Rathi identified a number of opportunities that the use of AI could provide to financial services, such as a boost to productivity.

He explained that a study published in April by the National Bureau of Economic Research in the US found that productivity rose by 14 per cent when more than 5,000 customer support agents used an AI conversational tool.

Better tackling of the advice gap was also identified as a benefit; Rathi stated AI could facilitate "better, more accurate information being delivered to everyday investors, not just the wealthiest customers who can afford bespoke advice".

The ability to hyper-personalise products and services to people to better meet their needs and the ability to tackle fraud and money laundering more quickly, accurately and at scale were also identified as potential benefits.

What's the downside?

However, Rathi voiced caution about the risks AI may pose to financial services, pointing out that misinformation fuelled by social media can affect price formation across global markets.

He evidenced this by pointing to May 22 this year, when a suspected AI-generated image showing the Pentagon in the aftermath of an explosion spread across social media just as US markets opened.

As a result, global financial markets were “jolted” until US officials clarified it was a hoax.

Another risk identified in his speech was AI leading to cyber fraud, cyber attacks and identity fraud increasing in scale, sophistication, and effectiveness.

Rathi explained that “as AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate at the same time”.

In response, Medius invoice fraud expert, Paul Ellis, said: “AI is having an impact on almost all areas of life, and we’re really only at the beginning of what those issues could be.

“AI could be a powerful tool for criminals, so it’s important workplaces stay on top of the latest developments and make sure staff are up to date as well.”

tom.dunstan@ft.com
