FCA concerned about firms not tackling tech risk


Speaking at a conference on artificial intelligence ethics in the financial sector today (July 16), the FCA’s executive director of strategy and competition, Christopher Woolard, said some firms "haven't done any thinking" around the issue of risk in technology, which was "obviously a concern".

He said the FCA backed new developments such as AI but needed to balance this with preventing consumer harm.

Mr Woolard said the use of AI in customer-facing technology at the firms the FCA regulates was still very much in the exploration phase, with the technology otherwise largely employed for back-office functions.

He added that most of those who led financial services firms were aware of the need to act responsibly, with larger firms seeming more risk averse than new entrants to the market.

The firms the FCA was concerned about were those that had not thought about the issue at all.

Mr Woolard said: "If firms are deploying AI and machine learning, they need to ensure they have a solid understanding of the technology and the governance around it.

"This is true of any new product or service, but will be especially pertinent when considering ethical questions around data."

He told the conference, which was hosted by the Alan Turing Institute, that the FCA wanted to see boards asking themselves 'what is the worst thing that can go wrong?' and putting mitigations in place against those risks.

He added that the City watchdog would not take a universal approach to AI across financial services, as the impact and possible harm would take different forms in different markets and would therefore have to be dealt with on a case-by-case basis.

He said: "The risks presented by AI will be different in each of the contexts it’s deployed. 

"After all, the risks around algo trading will be totally different to those that occur when AI is used for credit ratings purposes or to determine the premium on an insurance product."

Despite this, the FCA does not want awareness of regulatory and consumer risk to act as a barrier to innovation in the interests of customers, Mr Woolard said.

For example, in its regulatory sandbox — which launched in 2015 to allow businesses to test innovative products, services and business models without facing all of the usual regulatory consequences — the FCA saw a number of tests relating to digital identity.

Such propositions use machine learning to help businesses verify the identity of their customers digitally, bypassing the need to go into a branch and have a cashier check whether their ID is genuine.

Mr Woolard said this was good for competition but that it could be even more effective if more sophisticated techniques were deployed.

Other success stories from the FCA’s sandbox included the financial planning app Multiply, which was given the green light by the regulator earlier this month after an 18-month testing process.

Several of the big banks have also taken part in the advice unit arm of the FCA sandbox with a view to launching robo-advisers.

Aside from that, a number of advisers innovating within the AI space have sprung into the market lately, with Rosecut, which provides investment-focused financial advice for the middle market, being the latest robo-adviser to launch.

But robo-advisers have come and gone in recent years, many citing cost problems for their departure.

In May Investec closed its Click and Invest robo-advice business following two years of losses which amounted to about £32m.

In today’s speech, Mr Woolard accepted that although firms were used to the FCA's way of regulating, how this looked in practice for AI was untested.

For example, Mr Woolard said there was a growing consensus that algorithmic decision-making needs to be 'explainable', but it was up for debate at what level that explanation needed to be pitched: to an expert, a chief executive or the consumer themselves.

He added that using a more interpretable algorithm could dull the predictive edge of the technology and hinder the innovation, which was one example of the "trade-offs we're going to have to weigh up".

Mr Woolard announced the FCA had partnered with the Alan Turing Institute to explore the transparency and explainability of AI in the financial sector to move the debate towards a better understanding of practical challenges AI poses.

Public trust is also vital to the development of AI, as processes such as open banking — where the consumer allows a firm access to their banking information — can help innovation flourish, according to the FCA.

But Mr Woolard pointed out that processes like open banking relied on public trust and the public seeing the value data can create for them.

A study last year showed less than a third of customers trusted open banking, with many saying they would not sign over access to their financial data even to well-known companies such as Apple or Google.

Mr Woolard said: "A key determinant of future competition will be whether data is used in the interests of consumers or used by firms to extract more value from those consumers.

"As the market in data grows and machine learning continues to develop, firms will find themselves increasingly armed with information and may be tempted into anti-competitive behaviours."

He added that firms needed to stay consumer-centric and must keep asking themselves 'is this morally right?' rather than 'is this legal?'.

imogen.tew@ft.com

What do you think about the issues raised by this story? Email us on fa.letters@ft.com to let us know.