Long Read | Mar 5, 2024

Use of AI is increasing and regulation is following fast

The EU AI Act is set to be ratified in the coming months. (AndersonPiza/Envato Elements)

In the past decade we have seen artificial intelligence move from being a fringe technology to a central component of many firms’ operations.

Complex models are now commonly deployed to assist with financial crime detection, credit scoring, risk assessment, customer engagement, personalised banking, algorithmic trading and even compliance. AI offers firms a new frontier, even as the management of those models comes under increased scrutiny.

It is not just in the first line of defence that AI offers firms opportunities for development. In oversight processes, for instance, AI has the potential to reduce human error and dramatically increase the scale, speed and depth at which data can be analysed, leading to better customer outcomes and tighter adherence to regulatory rules.

This approach promises to transform traditional compliance methodologies, shifting the focus from manual, reactive processes to proactive, data-driven strategies. However, this shift is not without its challenges.

Firms must carefully manage the risks associated with AI, including over-reliance on the technology, data privacy, algorithmic bias and cybersecurity, ensuring that the deployment of AI adheres to stringent regulatory standards and ethical guidelines. 

As such, the successful integration of AI into operational and oversight functions will depend on a firm's ability to foster a culture of continuous learning and adaptation, ensuring that technological advances are leveraged responsibly and in line with the overarching goals of regulatory compliance and consumer protection.

In simple terms, it is a 'who watches the watcher' scenario. If AI is deployed to aid firms with operational or customer-facing processes, internal decision-making or regulatory compliance, AI itself must also be overseen in a robust way to ensure those rigorous standards are met.

Doing so will require firms to ensure that decision-making throughout the implementation process is documented and signed off.

It may well be that the difference between regulatory intervention and successful ongoing operation comes down to who is in the room as the first decisions are made, and who is accountable for the end-to-end processes being put in place.  

Legislation for regulating the use of AI

The EU AI Act, which is progressing towards finalisation, seeks to address the challenges of regulating AI. It introduces a comprehensive regulatory framework for the use of AI across all sectors, including financial services, setting a precedent for global AI regulation.

For financial firms, this act categorises AI systems based on their risk levels, imposing stringent requirements on high-risk applications, such as those used in credit scoring, fraud detection and compliance. 

Firms will need to ensure their AI applications are transparent, explainable and governed by robust accountability mechanisms. This will necessitate a thorough review and potential redesign of AI systems to comply with enhanced data governance, risk management and documentation standards. 

The UK’s journey towards AI regulation, while currently less advanced than Europe’s, is also underway.

New rules on model risk management for banks, set out in the Prudential Regulation Authority's supervisory statement SS1/23, come into effect from April 2024 and tackle some of the issues arising from the lack of explainability and interpretability of complex models.

However, the Financial Conduct Authority's senior managers and certification regime already requires senior managers to take reasonable steps to ensure accountability and responsibility for a firm's systems and controls, which very much covers the use of AI and other such models.

Transparency and accountability

In response to these regulatory challenges, financial services firms will have to enhance their risk practices, even where model risk management is already in place, integrating principles specifically tailored to the nuances of AI and machine-learning technologies.

This entails a thorough reassessment of governance frameworks and senior management accountabilities, alongside model identification, development and validation processes, to ensure opaque 'black boxes' do not exist within firms’ control environments.

These developments mark a significant shift in regulatory philosophy, moving towards a more holistic understanding of model-based risk that encompasses the unique attributes of AI. 

As firms grapple with these regulatory demands, the focus will increasingly be on establishing comprehensive governance frameworks that not only comply with current regulations but are also adaptable to future advances in AI, ensuring that compliance stays in lockstep with the technology.

As financial firms incorporate AI more deeply into their operations, the path forward will redefine how they manage compliance and interact with customers.

The introduction of the EU AI Act and forthcoming UK regulations presents an opportunity for these firms not just to meet new standards, but to lead in the ethical and transparent use of AI technology.

Success in this landscape will depend on a firm's ability to innovate responsibly, ensuring that advancements in AI enhance customer experiences and risk management without compromising ethical values.

If this can be achieved, the financial services sector can secure a future where technology and compliance work hand in hand to support both industry growth and consumer trust.

Richard Taylor is a director and Mark Turner is managing director of financial services compliance and regulation at Kroll