Long Read | May 9, 2023

How to ensure your company is ready to address AI risk

Regulated financial services businesses and their data and technology suppliers must take steps to address artificial intelligence risks. (Photo: DC_Studio/Envato)

Artificial intelligence has the potential to revolutionise user experience, business decision-making and almost all aspects of technology use across the financial services sector. As ChatGPT, Bard and other user interfaces have popularly demonstrated, AI’s usefulness is no longer limited to unseen backend technology. 

However, to unlock its potential, regulated financial services businesses and their data and technology suppliers must take steps to address AI risk effectively.

There is a growing need for regulated businesses to ensure that they have the right people, governance structures and controls in place to effectively address AI risk

Though the legal and regulatory framework for AI remains underdeveloped, appropriate risk management measures can be put in place now by observing current best practices and ensuring that effective contractual protections are agreed when relying on any third parties in the AI supply chain.

The legal and regulatory framework

In the UK, a range of laws will apply to the development and use of AI by regulated financial services businesses. None, however, has been designed specifically to deal with AI risk.

Existing laws include those governing data protection, information security and data breaches, as well as those prohibiting the use of third-party data and intellectual property without permission. These laws include criminal sanctions for unlawfully accessing data on third-party systems.

At a high level, all financial services businesses must also treat their customers fairly when using or relying on AI, act in the customer’s best interests, comply with the Consumer Duty and ensure that any AI in use does not undermine operational resilience.

There are also existing laws that prevent discriminatory outcomes, and those designed to protect against unfair business practices, which need to be examined in the context of AI.

The UK government’s plans indicate that the existing legal and regulatory framework will soon undergo significant change. It recognises that there are gaps that need to be filled.

The approach taken by the UK government is different to that of other governments and regional bodies such as the EU. While the EU has gone down the path of drafting a law to govern the technology itself, the UK is focusing on regulating outcomes.

Rather than implement a single AI act, the UK government has firmly signalled an intention to take a principles-based approach and focus on restricting and preventing the potential adverse impacts of AI.

It views this approach as pro-innovation: the idea is to regulate poor and unsafe outcomes for people rather than place arbitrary restrictions on the development of the technology.

Regulatory rulemaking and guidance are expected to be central to this approach.