What the FCA thinks about Artificial Intelligence

Artificial intelligence (AI) is likely to be deployed ever more widely by financial services companies, but the Financial Conduct Authority (FCA) is concerned that lines of accountability within the industry could become blurred as the technology develops.

In a speech at the Economist Impact Finance Transformed Event on 12 July, the regulator’s chief executive, Nikhil Rathi, said AI presented “many opportunities”, including the ability to “hyper-personalise” products and potentially cut the advice gap.

This is because the technology enables advice to be delivered at a lower cost, which he referred to as “closing the advice gap”.

But Rathi noted that there remained many unanswered questions around how the financial services industry could use AI in a way that was not detrimental to clients. 

He said the FCA was already seeing business models that relied on AI coming to it for approval, both from new firms and from within firms it already regulated. He is particularly focused on the risk of cybercrime, which could increase further as AI is deployed within firms.

Rathi added: “We still have questions to answer about where accountability should sit: with users, with the firms or with the AI developers? And we must have a debate about societal risk appetite.   

"What should be offered in terms of compensation or redress if customers lose out due to AI going wrong? Or should there be an acceptance for those who consent to new innovations that they will have to swallow a degree of risk?”  

The chief executive said the FCA would soon launch a regulatory sandbox for artificial intelligence.

Arun Kumar, regional director at ManageEngine, commented that the FCA was right to raise a red flag to banks, investors and insurers, as the recent AI boom has put scammers into overdrive.

Kumar added: "We’ve seen the rise of cyber fraud, cyber-attacks, and identity fraud as a result. We need a dual defence, with regulators and businesses joining forces to put the necessary regulation and security practices [in place] to keep pace with the level of attacks.

"We shouldn’t [just] be thinking about the current threats. The next wave of AI-cyber-attack innovation needs to be foreseen and carefully managed. Strong security practices and an AI fuelled cybersecurity defence can bolster cyber defences. 

"We need to fight AI driven cyber-attacks with AI driven cyber defences. For example, artificial intelligence and machine learning helps to detect any abnormal behaviour by collating data from various sources. [It also helps to] correlate and mitigate attacks proactively."

david.thorpe@ft.com