Robo-advice based solely on artificial intelligence cannot be fully regulated because it is impossible to track the decision process, tech experts have warned.
Most decision-based or predictive applications of AI, such as advice, rely on machine learning, a subset of AI that uses patterns and inferences within data to make predictions.
The AI learns from a training set of data and eventually becomes able to predict or decide the best financial product for an individual, drawing on large volumes of historical advice data and past purchase trends and behaviours.
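The kind of learning described can be sketched in miniature. The example below is purely illustrative and not any firm's actual system: all the client fields, product names and figures are invented, and the "model" is a toy nearest-neighbour lookup that recommends whatever product the most similar historical client was sold.

```python
# Illustrative sketch only: a toy nearest-neighbour "recommender" trained on
# a handful of synthetic historical cases. All data here is invented.
import math

# Each historical case: (age, risk appetite 0-1, savings in £k) -> product sold
history = [
    ((25, 0.9, 10),  "equity_fund"),
    ((30, 0.8, 25),  "equity_fund"),
    ((55, 0.2, 120), "gilt_fund"),
    ((60, 0.1, 200), "gilt_fund"),
    ((40, 0.5, 60),  "balanced_fund"),
]

def recommend(client):
    """Predict a product by copying the most similar historical client."""
    def distance(a, b):
        # Scale each feature to a roughly comparable range before comparing.
        scales = (40.0, 1.0, 100.0)
        return math.sqrt(sum(((x - y) / s) ** 2
                             for x, y, s in zip(a, b, scales)))
    closest_features, product = min(
        history, key=lambda case: distance(case[0], client))
    return product

print(recommend((28, 0.85, 15)))   # a younger, risk-tolerant client
```

Even in this trivial form the pattern Mr Taylor describes is visible: the recommendation falls out of distances between rows of numbers, not out of any stated rule a regulator could audit.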
According to Ben Taylor, chief technical officer at AI firm Rainbird, such decisions or predictions will struggle in terms of financial regulation because the process is "unexplainable".
He said: “Large amounts of personal data can help predict what will happen in the future which, on the face of it, sounds ideal because most financial services firms are sitting on a lot of the data.
“But machine learning models are full of numbers and not interpretable for most humans.”
Regulators would therefore find it near impossible to understand what had caused incorrect recommendations or bad advice, raising questions about accountability and how any future complaints could be handled, according to Mr Taylor.
He added: “Even if we can demonstrate that it’s giving good answers, you still can’t satisfy what the regulator will want and what consumers will want.”
The regulator has echoed this concern. In a speech at an artificial intelligence ethics conference last month (July 16), Christopher Woolard from the Financial Conduct Authority said there was a growing consensus that algorithmic decision-making needed to be 'explainable', but said it was open to debate at what level that explanation was needed: for an expert, a chief executive or the consumer themselves.
He added that using a more interpretable algorithm could dull the predictive edge of the technology and hinder innovation, which he gave as one example of the "trade-offs we're going to have to weigh up".
Mr Woolard announced the FCA had partnered with the Alan Turing Institute to explore the transparency and explainability of AI in the financial sector to move the debate towards a better understanding of the practical challenges AI poses.
Andrew Firth, chief executive of Wealth Wizards, agreed that full robo-advice was "unregulatable" in its current form, saying the industry did not yet know how to explain it.
But he added there was 'explainable AI', which integrates the technology with the work of human advisers and operates as a "hybrid" system.
For example, in pension transfer advice a human financial planner would decide which factors are at play, such as the yield of the current pension pot, while the machine learning would work out the 'weighting' of each factor by looking at historical cases.
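A minimal sketch of that hybrid division of labour, under invented assumptions: the factor names (yield gap, health score, dependants) and the historical cases are hypothetical, and a simple logistic regression trained by gradient descent stands in for whatever method a firm like Wealth Wizards actually uses. The human chooses the factors; the machine learns only one weight per factor.

```python
# Hypothetical sketch of the "hybrid" approach: adviser-chosen factors,
# machine-learned weightings. All factor names and data are invented.
import math

# Historical pension-transfer cases a human reviewer signed off.
# Features per case: (yield_gap, health_score, dependants), each scaled 0-1.
# Label: 1 = transfer was advised, 0 = it was not.
cases = [
    ((0.9, 0.2, 0.0), 1),
    ((0.8, 0.3, 0.0), 1),
    ((0.2, 0.9, 1.0), 0),
    ((0.1, 0.8, 1.0), 0),
    ((0.7, 0.4, 0.0), 1),
    ((0.3, 0.7, 1.0), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Learn one weight per human-chosen factor with plain gradient descent.
weights = [0.0, 0.0, 0.0]
bias = 0.0
for _ in range(2000):
    for features, label in cases:
        pred = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
        error = pred - label
        weights = [w - 0.5 * error * x for w, x in zip(weights, features)]
        bias -= 0.5 * error

# Unlike a black-box model, the output is a handful of named, inspectable
# numbers: each weight says how strongly one adviser-chosen factor pushed
# the recommendation.
factor_names = ["yield_gap", "health_score", "dependants"]
print({f: round(w, 2) for f, w in zip(factor_names, weights)})
```

Because every weight is attached to a factor a human adviser deliberately chose, a reviewer can read off why a given recommendation was made, which is the sense in which Mr Firth calls this form of the technology explainable.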
According to Mr Firth, because a human has gone through the process of analysing the factors, the advice could be explained and therefore regulated.