Technology | Oct 16 2023

Advisers warn about firms entering client information into ChatGPT


Advisers have issued a warning following reports of some firms putting client information into ChatGPT to write suitability reports.

One such adviser was Capital Asset Management chief executive Alan Smith, who posted on X, formerly known as Twitter: “Heard on good authority that some advisers have been putting full client financial data into ChatGPT to write suitability reports”.

The same phenomenon was reported by Model Office founder Chris Davies, who said: “I’ve heard there are a small number of advice firms who are using ChatGPT to try and deliver some form of suitability report using client financial data.”

Risks

Davies told FTAdviser that the risks of doing this are “astronomical”, the main reason being that ChatGPT is “totally new software”.

One issue Davies identified was intellectual property (IP) infringement.

“You can put information on ChatGPT which may infringe another person’s or other firms’ or businesses’ IP”, he explained.

This is because the AI system can generate outputs that reproduce existing IP-protected works.

He also warned there are issues regarding data protection, explaining it is a “client consent issue”. 

“Are these clients being engaged on this so that these firms can feel that they have the authority to put the financial data into ChatGPT?”

While these were the “two big issues” mentioned by Davies, he also identified other problems with the practice, such as the possible malfunctioning of the technology.

He referred to this as “hallucination”, where the technology confidently produces inaccurate or fabricated output rather than behaving as expected.

As an example, he pointed to Google Bard which, when it was launched, wrongly credited a telescope with taking the first picture of a planet outside our solar system.

While he acknowledged that was an “extreme example” he added that it shows such technology “is not reliable by any means at this moment in time”.

Reasons why

Speaking on why advisers might engage in this practice, Davies stated: “There’s an element of laziness. 

“The issue with AI is that it can make humans lazy because it automates a lot of the stuff that we’d normally have to do and we think it will do the work for us.”

He also suggested that such advisers are “not doing their homework”, pointing to the variety of tools in the marketplace that can provide a similar function safely and securely.

“There are tools that can write a suitability report”, he said.

The bright side

However, while there were many issues with the aforementioned use of ChatGPT, Davies made it clear that the use of chatbots in financial advice is not inherently bad.

“You can use the chatbot, as long as you do it correctly.

“Like anything, if you’re going to outsource and use technology, you need to make sure you do your due diligence on governance and research.”

He added that any such technology needs to be trained extensively on the rules and regulations, as these are dynamic, have different variables and are principles-based.

Davies additionally pointed to examples of ChatGPT being used well in the marketplace, such as supporting client engagement.

He also pointed out that “you can look at data points and ask the chatbot to pull out an Excel spreadsheet across these demographics of clients and this age group”.

However, he cautioned that any such tool has to be highly trained and highly cyber-secure.

This was a view shared by Smith, who said chatbots could be used well by the industry if they were used “in a walled garden”.

tom.dunstan@ft.com
