FCA concerned about firms not tackling tech risk

The Financial Conduct Authority is concerned about firms that are not thinking about the risk technology can pose to consumers.

Speaking at a conference on artificial intelligence ethics in the financial sector today (July 16), the FCA’s executive director of strategy and competition, Christopher Woolard, said some firms "haven't done any thinking" around the issue of risk in technology, which was "obviously a concern".

He said the FCA backed new developments such as AI but needed to balance this with preventing consumer harm.

Mr Woolard said the use of AI in customer-facing technology was still very much in the exploration phase among the firms the FCA regulates, with the technology otherwise largely employed for back-office functions.

He added that most of those who led financial services firms were aware of the need to act responsibly, with larger firms seeming more risk averse than new entrants to the market.

The firms the FCA is concerned about are those that have not thought about the issue at all.

Mr Woolard said: "If firms are deploying AI and machine learning, they need to ensure they have a solid understanding of the technology and the governance around it.

"This is true of any new product or service, but will be especially pertinent when considering ethical questions around data."

He told the conference, which was hosted by the Alan Turing Institute, that the FCA wanted to see boards asking themselves 'what is the worst thing that can go wrong?' and providing mitigations against those risks.

He added that the City watchdog would not take a universal approach to AI across financial services, as the impact and possible harm would take different forms in different markets and would therefore have to be dealt with on a case-by-case basis.

He said: "The risks presented by AI will be different in each of the contexts it’s deployed. 

"After all, the risks around algo trading will be totally different to those that occur when AI is used for credit ratings purposes or to determine the premium on an insurance product."

Despite this, the FCA does not want awareness of regulatory and consumer risk to act as a barrier to innovation in the interests of customers, Mr Woolard said.

For example, in its regulatory sandbox — which launched in 2015 to allow businesses to test innovative products, services and business models without facing all of the usual regulatory consequences — the FCA saw a number of tests relating to digital identity.

Such propositions use machine learning to help businesses verify the identity of their customers digitally, bypassing the need to go into a branch and have a cashier check whether their ID is genuine.

Mr Woolard said this was good for competition, but that it could be even more effective if more sophisticated techniques could be deployed.

Other success stories from the FCA’s sandbox included the financial planning app Multiply, which was given the green light by the regulator earlier this month after an 18-month testing process.