Regulation | Aug 6 2019

Full robo-advice 'impossible to regulate'

Robo-advice based solely on artificial intelligence cannot be fully regulated because it is impossible to track the decision process, tech experts have warned.

The form of AI used in most decision-making or predictive scenarios, such as advice, is machine learning — a subset of AI that finds patterns and inferences in data to make predictions.

The AI learns from a training set of data and is eventually able to predict or decide the best financial product for an individual based on large amounts of data on historical advice and previous purchase trends and behaviours.
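A minimal sketch of the idea, assuming nothing about any firm's actual system: a toy recommender that predicts a product for a new client from the most similar historical advice case. The client features, products and cases here are all hypothetical illustrations.

```python
# Toy nearest-neighbour recommender: predict the product advised in the
# most similar historical case (hypothetical data, purely illustrative).
from math import dist

# Hypothetical historical cases: (age, risk tolerance 0-1) -> product advised
HISTORY = [
    ((25, 0.9), "equity_fund"),
    ((30, 0.8), "equity_fund"),
    ((55, 0.2), "bond_fund"),
    ((60, 0.1), "bond_fund"),
]

def recommend(age: float, risk_tolerance: float) -> str:
    """Return the product advised in the closest historical case."""
    features = (age, risk_tolerance)
    _, product = min(HISTORY, key=lambda case: dist(case[0], features))
    return product

print(recommend(28, 0.85))  # a young, risk-tolerant client -> "equity_fund"
```

Even in this tiny example, the answer depends on distances between numbers rather than on reasons a human could articulate — which is the explainability problem raised below, magnified many times over in real models.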

According to Ben Taylor, chief technical officer at AI firm Rainbird, such decisions or predictions will struggle in terms of financial regulation because the process is "unexplainable".

He said: “Large amounts of personal data can help predict what will happen in the future which, on the face of it, sounds ideal because most financial services firms are sitting on a lot of the data.

“But machine learning models are full of numbers and not interpretable for most humans.”

Therefore regulators would find it near impossible to understand what had caused incorrect recommendations or bad advice, raising questions about accountability and the way any future complaints could be handled, according to Mr Taylor.

He added: “Even if we can demonstrate that it’s giving good answers, you still can’t satisfy what the regulator will want and what consumers will want.”

This view is backed by the regulator. In a speech at an artificial intelligence ethics conference last month (July 16), Christopher Woolard of the Financial Conduct Authority said there was a growing consensus that algorithmic decision-making needed to be 'explainable', but that it was up for debate at what level that explanation should be pitched — for an expert, a chief executive or the consumer themselves.

He added that using a more interpretable algorithm could dull the predictive edge of the technology and hinder the innovation, which was one example of the "trade-offs we're going to have to weigh up".

Mr Woolard announced the FCA had partnered with the Alan Turing Institute to explore the transparency and explainability of AI in the financial sector to move the debate towards a better understanding of the practical challenges AI poses.

Andrew Firth, chief executive of Wealth Wizards, agreed that full robo-advice was “unregulatable” in its current form and said the industry did not know how to explain full robo-advice at this time.

But he added there was ‘explainable AI’ which integrated the technology with the work of human advisers and worked like a “hybrid” system.

For example, with pension transfer advice, a human financial planner would identify the various factors at play — such as the yield of the current pension pot — while the machine learning would work out the 'weighting' of each factor by looking at historical cases.

According to Mr Firth, because a human has gone through the process of analysing the factors, the advice can be explained and therefore regulated.
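The hybrid approach described above can be sketched roughly as follows. This is an illustration under stated assumptions, not Wealth Wizards' actual system: the pension-transfer factors and historical cases are hypothetical, and a simple perceptron-style update stands in for whatever learning method a real firm would use. The key property is that the final decision is a transparent weighted sum of human-chosen factors.

```python
# Hybrid "explainable AI" sketch: a human adviser chooses the factors;
# machine learning derives each factor's weight from historical cases.
# All factors and cases below are hypothetical illustrations.

# Factor scores (0-1) per case: [critical_yield_gap, need_for_flexibility,
# guaranteed_income_need], plus the advice given (1 = transfer advised).
CASES = [
    ([0.9, 0.8, 0.1], 1),
    ([0.8, 0.9, 0.2], 1),
    ([0.2, 0.1, 0.9], 0),
    ([0.1, 0.3, 0.8], 0),
]

def learn_weights(cases, lr=0.1, epochs=200):
    """Perceptron-style learning of weights over the human-chosen factors."""
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for factors, label in cases:
            score = bias + sum(w * f for w, f in zip(weights, factors))
            error = label - (1 if score > 0 else 0)
            weights = [w + lr * error * f for w, f in zip(weights, factors)]
            bias += lr * error
    return weights, bias

WEIGHTS, BIAS = learn_weights(CASES)

def advise(factors):
    """Explainable decision: each factor's contribution is weight * score."""
    return 1 if BIAS + sum(w * f for w, f in zip(WEIGHTS, factors)) > 0 else 0
```

Because the model is just a weighted sum, each factor's contribution to a given recommendation can be read off directly — which is what makes this style of system easier to explain to a regulator than an opaque predictive model.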

Paul McNamara, chief executive of Evalue, also thought there were concerns around how AI decisions based on big data sets could become regulated but thought there were more “urgent and bigger opportunities” than “broad data use”.

Mr McNamara said it was clear in UK regulation that advice required an in-depth understanding of the consumer, so if AI could help take full account of that person’s circumstances and attitude to risk, it could provide advice within regulatory terms.

He added: “What we need from AI is for it to use in-depth, personal data rather than broad data about the overall market. This could help advisers with elements of the advice journey such as the fact find or getting data to show this understanding."

Another issue raised by Mr Taylor was the problem of 'implicit biases' held in data, which could manifest themselves in financial decisions.

He said: “This technology is built around data and it’s very hard, if not impossible, to know what part of that data is being used and what biases that data holds.”

Mr Taylor said he had seen examples where a firm trained a model to predict risk on data which carried an implicit bias on gender and ethnicity, and the model then underwrote policies in line with that bias.

Mr Firth agreed there was a risk of this bias in all aspects of AI, but said firms could mitigate it by "picking the right data".

He said Wealth Wizards used a small set of data — selected by choosing good advice cases across different circumstances and sections of society — to manage the bias as best it could.
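One simple way to read the data-selection idea above, sketched here as an assumption rather than a description of Wealth Wizards' process: cap the number of vetted "good advice" cases drawn from each section of society, so that no one group dominates the training set. The segment labels and cases are hypothetical.

```python
# Hedged sketch of bias mitigation by data selection: take at most
# `per_segment` vetted cases from each demographic segment, rather than
# using raw historical data wholesale (hypothetical data).
from collections import defaultdict

def balanced_subset(cases, per_segment):
    """Return up to `per_segment` cases from each segment of society."""
    by_segment = defaultdict(list)
    for case in cases:
        by_segment[case["segment"]].append(case)
    subset = []
    for segment_cases in by_segment.values():
        subset.extend(segment_cases[:per_segment])
    return subset

# Hypothetical vetted "good advice" cases tagged by segment.
cases = [
    {"segment": "young_saver", "advice": "isa"},
    {"segment": "young_saver", "advice": "isa"},
    {"segment": "young_saver", "advice": "pension"},
    {"segment": "retiree", "advice": "annuity"},
]

print(len(balanced_subset(cases, per_segment=1)))  # prints 2
```

This only addresses representation across segments; it does not remove biases encoded inside the factor values themselves, which is the deeper problem Mr Taylor describes.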

According to Mr McNamara, bias was a particular problem in financial planning data sets as such data could be biased “up-market” due to the extent of data based around wealthy individuals.

Mr Taylor also saw a potential risk of consumer detriment if the regulator did not "keep up" with changes in technology and the development of AI, but Mr Firth disagreed, adding: "if anything, AI is going too slowly in the marketplace".

The regulator was becoming “much more enthusiastic” about innovation and technology, according to Mr McNamara, as long as it was based around the principle of “better outcomes for consumers”.

A number of advisers innovating within the AI space have sprung into the market lately, with Rosecut being the latest robo-adviser to launch.

But robo-advisers have come and gone in recent years, many citing cost problems for their departure.

In May Investec closed its Click and Invest robo-advice business following two years of losses which amounted to about £32m.

imogen.tew@ft.com

What do you think about the issues raised by this story? Email us on fa.letters@ft.com to let us know.