Long Read | May 9 2023

How to ensure your company is ready to address AI risk

Regulated financial services businesses and their data and technology suppliers must take steps to address artificial intelligence risks. (Photo: DC_Studio/Envato)

Artificial intelligence has the potential to revolutionise user experience, business decision-making and almost all aspects of technology use across the financial services sector. As ChatGPT, Bard and other user interfaces have popularly demonstrated, AI’s usefulness is no longer limited to unseen backend technology. 

However, to unlock its potential, regulated financial services businesses and their data and technology suppliers must take steps to address AI risk effectively.


Though the legal and regulatory framework for AI remains underdeveloped, appropriate risk management measures can be put in place now by observing current best practices and ensuring that effective contractual protections are agreed when relying on any third parties in the AI supply chain.

The legal and regulatory framework

In the UK there is a range of laws that will apply to the development and use of AI by regulated financial businesses. None, however, have been designed specifically to deal with AI risk.

Existing laws include those dealing with data protection, information security and data breaches, and those prohibiting the use of third-party data and intellectual property without permission. These laws include criminal sanctions against unlawfully accessing data on third-party systems.

At a high level, all financial services businesses must also treat their customers fairly when using or relying on AI, act in customers' best interests, comply with the Consumer Duty and ensure that any AI in use does not undermine operational resilience.

There are also existing laws that prevent discriminatory outcomes, and those designed to protect against unfair business practices, which need to be examined in the context of AI.

The UK government’s plans indicate that the existing legal and regulatory framework will soon undergo significant change. It recognises that there are gaps that need to be filled.

The approach taken by the UK government is different to that of other governments and regional bodies such as the EU. While the EU has gone down the path of drafting a law to govern the technology itself, the UK is focusing on regulating outcomes.

Rather than implement a single AI act, the UK government has firmly signalled an intention to take a principles-based approach and focus on restricting and preventing the potential adverse impacts of AI.

It views this approach as pro-innovation: the idea is to regulate poor and unsafe outcomes for people rather than place arbitrary restrictions on the development of the technology.

Regulatory rulemaking and guidance are expected to be central to this approach.

The Bank of England and Financial Conduct Authority have already taken significant steps to outline a framework that will require regulated financial businesses to demonstrate that they have robust governance, data and model risk management measures in place.  

Some current issues to consider – data sourcing, accuracy and fairness

While the law remains in a state of flux, there is no shortage of best practice approaches and international standards for financial services businesses to review and consider implementing.

These approaches, while non-mandatory, set out a lot of the detail that can support AI risk management in a way that is consistent with the UK’s intended overarching AI principles.

As a starting point, AI impact assessment processes should be put in place and invoked at the earliest stage at which the use of AI is contemplated.

Before any data is collected, consideration should be given to the impact the use case may have on the business, its clients, interconnections across the financial system, societal impacts and the business’s sustainability agenda.

Having an AI code of conduct, AI-specific policy and associated risk templates can support identification of circumstances where AI may be used and the path forward for using it.
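Purely as an illustration, the assessment dimensions described above could be captured in a structured record that a risk template requires to be completed before any data is collected. The Python sketch below is an assumption about how such a template might look; the field names and the readiness check are hypothetical, not a prescribed or regulatory format.

# A minimal sketch (not a prescribed format) of how the impact-assessment
# dimensions discussed above might be recorded before any data is collected.
# All field names are illustrative assumptions, not a regulatory standard.

from dataclasses import dataclass, field
from typing import List


@dataclass
class AIImpactAssessment:
    use_case: str                     # e.g. "AML transaction screening"
    business_impact: str              # effect on the firm's own operations
    client_impact: str                # effect on customers, including fair treatment
    systemic_impact: str              # interconnections across the financial system
    societal_impact: str              # wider societal effects, e.g. bias risk
    sustainability_impact: str        # alignment with the firm's sustainability agenda
    data_sources_identified: bool = False   # has data provenance been mapped yet?
    open_risks: List[str] = field(default_factory=list)

    def ready_to_collect_data(self) -> bool:
        """Proceed to data collection only once every dimension is documented,
        data provenance is mapped and no open risks remain."""
        dimensions = [
            self.business_impact, self.client_impact, self.systemic_impact,
            self.societal_impact, self.sustainability_impact,
        ]
        return all(dimensions) and self.data_sources_identified and not self.open_risks


# Illustrative usage: an assessment drafted at the earliest stage of a project.
assessment = AIImpactAssessment(
    use_case="LLM-assisted customer query triage",
    business_impact="Reduces manual review workload",
    client_impact="Faster responses; risk of inaccurate answers",
    systemic_impact="Low; no direct market-facing decisions",
    societal_impact="Potential bias in language handling",
    sustainability_impact="Increased compute usage",
    open_risks=["Hallucination rate not yet measured"],
)
print(assessment.ready_to_collect_data())  # False: provenance unmapped, one risk open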

Once this process is in place, the core legal and regulatory issues can more readily be uncovered and addressed. This applies whether the use case is embedding a large language model (LLM), developing an anti-money laundering solution or considering tools for better financial advice or investment decision-making.

Sourcing data to train AI models is one key legal issue that has become a hot topic.

Whether data is collected from an internal or external source, legal restrictions can prevent the business from further using it to train an AI model.

At the simplest level, the business needs to understand where its data has come from and the legal rights that attach to the data.

Those rights may relate to intellectual property licensing arrangements, commercial confidentiality restrictions or regulatory requirements that prevent secondary uses.

The accuracy of an output of AI is an equally significant issue. The level of excitement around the predictive capabilities that LLMs have displayed has been matched by the level of despair over their obvious failings and "hallucinations".  

Technical measures can be put in place and testing and monitoring carried out to ensure AI outputs are as accurate as is necessary for the purposes for which they are used.

Accuracy measures may include technical reviews of results for false positive and negative rates, and external measures of validity of the results.
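As a simple illustration of such a review, false positive and false negative rates can be calculated against a labelled validation set and compared with a tolerance agreed for the particular use case. The Python sketch below uses invented labels and an assumed tolerance purely to show the arithmetic; it is not a complete validation regime.

# A minimal sketch of the kind of technical review described above: measuring
# false positive and false negative rates on a labelled validation set.
# The labels and the tolerance are illustrative assumptions, not real data.

def error_rates(actual, predicted):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for a, p in zip(actual, predicted) if p and not a)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    negatives = sum(1 for a in actual if not a)
    positives = sum(1 for a in actual if a)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr


# Illustrative check against a tolerance agreed for the specific use case,
# e.g. an alert model where missed positives carry regulatory risk.
actual    = [1, 0, 0, 1, 0, 1, 0, 0]   # ground-truth outcomes
predicted = [1, 0, 1, 1, 0, 0, 0, 0]   # model outputs on the same cases
fpr, fnr = error_rates(actual, predicted)
print(f"False positive rate: {fpr:.0%}, false negative rate: {fnr:.0%}")
assert fnr <= 0.5, "False negative rate exceeds the tolerance set for this use case"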

It is not enough that the results are accurate under training or test conditions; they also need to be accurate in the real-world contexts in which they are used.

Risk management steps may need to be taken to balance technical accuracy of results with transparency, privacy protection and broader fairness objectives.

Some highly accurate AI systems are said to have low levels of interpretability, which can have an impact on fairness when they are relied on to make decisions. If it is not apparent how a system has made a decision, it is difficult to justify the decision as a fair one.

These are just a few of the many interrelated issues that regulated businesses need to consider carefully as they move forward with AI technology.

As discussions continue around the potential for unfair bias, discriminatory outcomes and inaccurate results in the context of the use of AI for financial services, there is a growing need for regulated businesses to ensure that they have the right people, governance structures and controls in place to effectively address AI risk.

Luke Scanlon is head of fintech propositions and legal director at Pinsent Masons