Opinion | Jan 12 2024

'Lessons for advisers from the Post Office scandal'

(Dan Kitwood/Getty Images)

Irish comedian Frank Carson had a catchphrase many will still recall today: "it's the way I tell 'em" – a catchphrase that could easily, retrospectively, apply to the Post Office management and Fujitsu.

The scandal surrounding the appalling treatment of 736 sub-postmasters and sub-postmistresses over so many years by the Post Office (interestingly, an FCA-regulated entity) should be ringing alarm bells for all financial adviser firms – smaller ones in particular – regarding the responsibilities the FCA places upon firms in its drive toward the use of technology to deliver lower-cost advice to clients.

There are some ‘learnings’ for advisers (that all-enveloping regulatory excuse phrase) to be had, as well as some positive ‘outcomes’ – another of those regulatory phrases, usually rolled out when the very opposite happens.

In a 2017 interview, the late Stephen Hawking issued a chilling warning about the imminent rise of artificial intelligence, saying that AI would soon reach a level where it would be a "new form of life that will outperform humans".

Advisers should reflect on the possible consequences of bringing the delivery of financial advice into the 21st century. After all, with the smartphone, tablet, AI, ChatGPT and virtual reality all breaking through boundaries, why should financial advice not find itself in the vanguard of change?

It should work, could work, but will not work until something very simple yet clearly requiring a considerable volte-face takes place.


Steve Jobs said: "Older people sit down and ask, 'What is it?' but the child asks, 'What can I do with it?'"

Smart technology like Google, Siri and Alexa exists, and is readily available in the average home. Algorithm-based analytics are there, right now, to give the average family an automated way – beyond just shopping – to self-diagnose all sorts of things and self-prescribe a solution.

Why not do this with their financial advice needs? The elephant in the room of progress is the word ‘advice’. 

In the financial services world where products are advised/delivered/sold/distributed by the intermediated channel, the buck of responsibility always stops with the financially weakest part of the process, the advisory firm.

In the case of the Post Office and Fujitsu’s Horizon software, for years the blame fell on the party least able to defend itself – the small-business sub-postmasters – because the software could not possibly be wrong.

I can see this exact scenario playing out for small financial adviser firms that invest in technology to provide access to lower-cost advice, only to find that, if the software has faults, the blame will be placed at their door 10 or 15 years later.

Robo or automated solutions should work – it is all in the ‘math’. Very complicated algorithms drive the customer to a very specific outcome.

This is where it gets complicated: the outcomes delivered to all those postmasters via the Fujitsu Horizon software could hit advisers in a similar way. 

Should the software manufacturer's algorithm that the adviser and their client relied upon prove, in five, 10 or 15 years, to have had an unforeseen glitch, regulatory retrospective retribution will rain down on the advisory firm – not on the maker of the software.

There is a simple solution to a complex problem.

That is to have the algorithms those technology firms provide certified by the FCA as fit for the purpose they were designed for.

Fit for purpose accreditation already exists in other areas of regulation. 


Aircraft cannot fly in UK airspace without CAA approval. Drugs are certified as fit for purpose and prescription by the Medicines and Healthcare products Regulatory Agency.

So why can the FCA not approve automated advice models as fit for purpose? In doing so, the adviser would no longer be in the firing line for adopting technology to provide affordable advice to the mass-market consumer.

The ‘why not’ answer, according to Andrew Mansley – a technical specialist within the FCA’s innovation department, who engages with firms developing innovative business models to explain the requirements of the rules and relevant regulated activities – whom I spoke to at some length at the 2017 PFS Festival, was that it would be “anti-competitive”.

What?

The FCA needs to consider the following simple steps to improve the embrace of automated opportunities.

  • All providers of robo models and modelling software should apply to the FCA for approval – that approval would certify what the programme can and cannot do, and when.
  • The FCA approval will apply to that provider, their algorithms, the programme and its use.
  • Any changes or upgrades would require a certification upgrade.
  • The robo model software provider would require their own professional indemnity cover for any unforeseen failures.
  • The advisory firm would not be responsible for the failure of the programme as part of the FCA sign off.

Hawking warned that AI could develop a will of its own that conflicts with that of humanity. Under this approach, the advice responsibility buck would stop with the technology provider.

Put these in place and both the regulator and the software house would think very carefully about failure, and the adviser could engage with more consumers, at lower cost, with confidence restored.

Derek Bradley is founder and chief executive of Panacea Adviser