Opinion | Apr 29 2024

'AI could become a staple in the adviser's toolkit'

It's becoming more common to hear about artificial intelligence shaking up traditional and technology sectors, driving efficiency and productivity, and financial advice is no exception.

What often slips under the radar are the everyday, less glamorous repetitive tasks that keep advisers buried in paperwork rather than engaging with their clients. 

At the heart of financial advising, tasks such as KYC (know your client) fact-finding, suitability report generation and routine client communications are crucial yet often thanklessly repetitive, leading many to view them as the necessary humdrum, boring stuff.

The irony lies in the fact that these tasks, while mundane, form the backbone of client service quality and compliance – two pillars upon which the reputation of a financial adviser and their firm stands.

These are the bits that don't get talked about at the lunches or the away conferences.

Hence, the question on the minds of many advisers, and the industry as a whole, is: how can advisers navigate these tasks efficiently without compromising on service quality or falling foul of regulatory requirements?

Freeing up the humdrum

This is precisely where AI and especially generative AI can hold the potential to make a significant impact.

Now, imagine, as an adviser, freeing up hours every week because an AI system is smartly navigating through files, drawing insights, writing reports and even answering basic client queries.

Prior to November 2022 and the release of OpenAI's first version of ChatGPT, this was seen as the distant future. However, since the GenAI phenomenon and the flurry of releases from companies like Microsoft, Google, Meta, Anthropic and Mistral, to name just a few, this imaginary view is now very much within reach.

The advent of GenAI has made it possible for AI systems to generate comprehensive reports, distil complex market data into actionable insights, and provide channels for client engagement.

This is where the conundrum for most financial advisers and wealth managers lies, especially when trying to introduce AI systems or AI tools – most of which have not been specifically built for the financial industry. 

When you picture a financial adviser's day, you might imagine nice lunches, high-stakes decision-making, complex strategy planning, and in-depth client consultations. However, a significant portion of their day often gets bogged down with tasks that are anything but glamorous.

If the beauty of AI lies in its ability to handle routine and often repetitive tasks with ease, then implementing such systems would allow advisers to focus on what they do best: offering bespoke advice and building deeper relationships with their clients.

Who's to blame if it all goes wrong?

However, as with many things in the heavily regulated financial services industry, we need to add a caveat: it's not as simple as that.

Introducing AI into the mix brings its own set of challenges, particularly around regulatory compliance and the thorny issue of liability, especially when technology-driven decisions go awry.

The pressing question becomes: who bears the responsibility when an automated system falters – the adviser, the technology provider, or the firm itself? 

Liability is a major worry that keeps coming up, especially where AI-driven systems prove inadequate, carry biases, or are trained on client data.

There is an urgent need to address who is to blame when automated AI systems produce less-than-ideal results: the adviser or the technology.

If an AI system trained on historical data gives a customer advice that causes them to lose money, it is difficult to determine who is responsible.

Was it the adviser who decided to use AI in their practice, or the AI's developers, who may not have built their system for the financial industry and may not have taken certain precautions into account?

These are not just theoretical but rather genuine worries that financial firms and advisers are currently deliberating.

Currently, regulatory frameworks are not keeping up with the rapid improvements in AI technology, creating an ambiguous vacuum. 

The financial industry is regulated by laws that were written long before the explosion of AI models, and so they are not suited to dealing with some of the complexities of AI systems used as part of delivering financial advice.

In the financial industry, any new technology must navigate an array of rules and guidelines and ultimately place consumer protection at its heart.

Consumer duty considerations

This emphasis on consumer protection can be seen with the Financial Conduct Authority's consumer duty rules, which set higher and clearer standards of consumer protection across the financial services industry, requiring firms to prioritise their customers' needs. 

The duty applies to all firms with a key role in delivering retail customer outcomes; placing the responsibility on AI would not absolve a firm of its duty. The catchphrase 'computer says no' comes to mind.

This catchphrase, popularised by the British sketch television programme Little Britain, highlights that attempts to shift responsibility for providing diligent and informed advice by claiming 'it's the AI's fault' may not be sufficient.

From a regulatory perspective, the onus is on advisory firms to ensure the accuracy and appropriateness of advice given, regardless of whether it originated from a human mind or an AI algorithm.

This regulatory stance places a significant burden on advisers and firms, not only to choose their technological tools wisely but also to continuously monitor and validate the outputs of these systems, especially where they form part of the distribution chain for delivering financial advice.

Yet despite these hurdles, the potential gains in efficiency, client satisfaction, client outcomes and, from a financial perspective, the bottom line make the pursuit of AI integration an area of interest for many in the industry.

We've already seen firms such as Morningstar, Morgan Stanley, Schroders Personal Wealth, NatWest, JPMorgan and investment powerhouses like BlackRock integrate GenAI into both back-end services and client-facing engagement.

These integrations are reshaping areas such as KYC, where AI tools can extract relevant information from sources ranging from Zoom, Teams and Google Meet calls to emails and documents, and use it to populate fact finds.

They are also streamlining the creation of suitability letters and reports, drafting compliant documents swiftly while matching the advice given to a client's financial goals and objectives through the extracted KYC information.

We are also seeing the emergence of personal financial AI co-pilots and AI agents that are able to answer routine queries, give financial guidance and provide market insights, freeing up adviser resources.

These firms, among others, are not only embracing technology and staying competitive, but are also setting new standards in client service and operational efficiency.

As we progress through this new era of advancement we might find that AI becomes a staple in the financial adviser's toolkit, akin to the calculator or cash flow modelling software.

The key will be in finding the balance, by blending the strengths of both AI and human insight.

Financial advisers and firms can achieve a balance that enhances efficiency, accuracy, and client satisfaction, while still adhering to regulatory standards and ensuring the protection of consumers. 

We can move into a world where AI handles the humdrum while the human element remains at the forefront. This can be achieved by being proactive in our adoption, selecting tools built with regulatory engagement in mind, informing clients when AI is used, and committing to continuing professional development.

As Warren Buffett once famously said: "Predicting rain doesn't count, building an ark does." The landscape is continually changing, and as an industry we need to adapt with it by embracing change and being proactive.

Elemi Atigolo is a former SJP partner and co-founder of Inatigo, a GenAI business platform