Long Read · Jul 25 2023

Leveraging AI to enhance anti-money laundering efforts

It is crucial that AI is used ethically and is supported by human judgment and decision-making (Elnur/Dreamstime)

A new era of innovation is under way, and artificial intelligence signals a step change in our ability to address money laundering.

Fraud poses significant threats and challenges to financial institutions worldwide, and AI is opening new ways for organisations to manage risk and address financial crime. 

In a risk landscape where financial crime is continuously evolving, harnessing the vast amount of data needed to effectively identify and mitigate risk can be a challenge.

However, compliance teams are beginning to harness that data, enhance risk assessment processes, and achieve real-time monitoring capabilities through AI. This supports timely intervention to mitigate risk, report suspicious activity, and prevent illicit activity.

Machine learning algorithms and pattern recognition enable AI to quickly analyse large quantities of data and identify patterns that may indicate suspicious activity, which can then be flagged to a compliance professional.
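As a minimal illustration of this kind of pattern detection, the sketch below flags transactions that deviate sharply from an account's usual behaviour using a simple statistical rule; real AML models are far richer, and the amounts and threshold here are invented for illustration.

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from
    the mean -- a crude stand-in for the pattern recognition an
    AML model would perform over many features."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Typical payments with one conspicuous transfer mixed in.
history = [120, 95, 130, 110, 105, 98, 50_000]
print(flag_outliers(history, threshold=2.0))  # [50000]
```

Anything the rule flags would then be routed to a compliance professional for review rather than acted on automatically.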

Specifically, AI can be used to increase compliance efficiency in anti-money laundering activities, including customer screening against adverse media, sanctions lists, watchlists and politically exposed persons.
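A simplified sketch of such screening is fuzzy name matching against a watchlist, which catches spelling variants that an exact lookup would miss; the names, list, and cutoff below are invented for illustration.

```python
from difflib import SequenceMatcher

def screen_name(customer, watchlist, cutoff=0.85):
    """Return watchlist entries whose similarity to the customer
    name meets the cutoff, with the similarity score."""
    customer = customer.lower().strip()
    hits = []
    for entry in watchlist:
        score = SequenceMatcher(None, customer, entry.lower()).ratio()
        if score >= cutoff:
            hits.append((entry, round(score, 2)))
    return hits

watchlist = ["Ivan Petrov", "Acme Shell Holdings Ltd"]
print(screen_name("Ivan Petrof", watchlist))  # matches "Ivan Petrov"
```

Production screening systems use far more sophisticated matching (transliteration, aliases, dates of birth), but the principle of scoring near-matches rather than requiring exact ones is the same.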

The AI algorithms can help answer questions about intricate beneficial ownership networks, see patterns of behaviour that might indicate suspicious activity, reveal information about designated persons or entities, and create transparency around shell companies that criminals use to disguise their activities.
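To give a flavour of how ownership-network analysis works, the sketch below walks a registry of ownership links to surface who ultimately sits behind a chain of shell companies; the entities, names, and flat dictionary representation are all invented for illustration.

```python
def ultimate_owners(entity, ownership):
    """Follow ownership links through intermediate companies to the
    parties at the top of the chain. `ownership` maps an entity to
    the parties recorded as owning it."""
    owners = set()
    stack = [entity]
    seen = set()
    while stack:
        current = stack.pop()
        if current in seen:
            continue  # guard against circular ownership structures
        seen.add(current)
        parents = ownership.get(current)
        if not parents:
            owners.add(current)  # no owner on record: an end owner
        else:
            stack.extend(parents)
    return owners

# Two shell-company layers concealing a single individual.
ownership = {
    "Target Ltd": ["Shell A"],
    "Shell A": ["Shell B"],
    "Shell B": ["J. Smith"],
}
print(ultimate_owners("Target Ltd", ownership))  # {'J. Smith'}
```

Real beneficial-ownership resolution must also handle partial shareholdings, missing records and conflicting registries, which is where machine learning over large corporate datasets adds value.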

Using AI to extract meaning from risk-relevant data and automate the process of analysis saves time and reveals patterns and connections that may be difficult for humans to otherwise see.

However, AI-led AML solutions also present challenges in terms of data availability and ethics.

Transparency, explainability and governance in AI

While AI offers unique advantages in risk management to combat financial crime, it is crucial that it is used ethically and that it is supported by human judgment and decision-making.

AI technology will not face a fine or go to prison if something goes amiss and there are serious compliance failings; that accountability rests with people.

So transparency, explainability, and governance are needed to address concerns related to fairness and the unbiased application of AI.

AI-powered compliance that brings people into the process when judgment and decision-making are needed can help maintain ethical standards and public trust in the financial industry.


This human-in-the-loop approach promotes the responsible use of AI to drive greater efficiency and safeguard the integrity of financial systems.

Over-reliance on AI models without sufficient human supervision would be problematic. Organisations need to strike a balance between compliance efficiency and human expertise, recognising that humans and machines can learn from each other.

AI is best used to enhance human expertise in compliance, and innovative technology can be leveraged to augment and support human analysis, freeing people for the critical tasks that require judgment and decision-making.

Humans exercise ethical reasoning and contextual understanding, which are crucial in complex AML compliance scenarios. 

Leveraging AI for its strengths in data assimilation and processing can also move time-consuming and repetitive tasks away from compliance professionals, so they can focus on higher-value activities, such as investigating nuanced cases, conducting enhanced risk assessments, and making critical decisions based on AI-generated analysis.

Meeting regulatory expectations 

Financial institutions are beginning to adopt AI more widely to enhance efficiency in AML compliance.

While regulation around the use of AI in AML compliance continues to evolve, regulators generally support companies experimenting with AI to strengthen AML processes. It offers the potential to reveal risk, reduce false positives, and prioritise genuinely suspicious activity among the large volume of alerts that AML processes generate.

Specific regulations regarding the use of AI in compliance are yet to be established. However, in the US, regulators have provided guidance that encourages institutions to explore its potential.

This guidance emphasises that experimentation with AI does not automatically trigger increased regulatory scrutiny, even if it reveals areas for improvement.

The Financial Action Task Force has also recognised the value of AI and new AML technologies in accurately analysing data in real time.

Transparency and explainability are key to ensuring regulators have visibility into the workings of AI models and decision-making processes.

Institutions will naturally want to understand how their AI makes decisions: how it is trained, how potential risks can be identified and addressed, and how bias is mitigated. Failure to demonstrate this understanding and control could result in fines and reputational damage.

Overcoming data limitations 

One of the key challenges faced when implementing AI into AML processes is the availability and quality of data.

While AI algorithms excel at analysing vast amounts of data, the effectiveness of algorithms relies on the quality and diversity of the data on which they are trained.

Insufficient or incomplete data can lead to biased and inaccurate results, compromising the efficacy of AI-led AML initiatives.


Financial institutions can overcome these limitations by collaborating and taking proactive steps to enhance the quality and quantity of data available for AI analysis.

Collaboration with external data providers, industry networks and regulatory bodies can also enrich the AI data ecosystem. This diverse dataset could be used to improve the accuracy of AI models and enable more effective detection of risk and suspicious activity.

Another strategy coming to light is the cleansing and standardisation of data before training AI models. This process involves identifying and rectifying any data inconsistencies, errors or duplications to ensure data is accurate, complete and consistent.
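A minimal sketch of such cleansing and standardisation is shown below: trimming whitespace, unifying case, collapsing common suffix variants and dropping duplicates before records ever reach a training pipeline. The field names and suffix rules are assumptions for illustration.

```python
import re

def standardise(record):
    """Normalise a raw customer record: collapse whitespace, unify
    case, strip trailing punctuation, and map 'LIMITED' to 'LTD'."""
    name = re.sub(r"\s+", " ", record["name"]).strip().upper().rstrip(".")
    name = re.sub(r"\bLIMITED\b", "LTD", name)
    return {"name": name, "country": record["country"].strip().upper()}

def deduplicate(records):
    """Drop records that standardise to identical values."""
    seen, clean = set(), []
    for r in records:
        s = standardise(r)
        key = (s["name"], s["country"])
        if key not in seen:
            seen.add(key)
            clean.append(s)
    return clean

# Two raw entries that are really the same company.
raw = [
    {"name": "  Acme   Limited", "country": "gb"},
    {"name": "ACME LTD.", "country": "GB "},
]
print(deduplicate(raw))  # a single standardised record
```

Even simple rules like these remove inconsistencies that would otherwise teach a model to treat one entity as two.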

This approach can help bridge data gaps and augment the training of AI models, enabling them to detect patterns that indicate risk more effectively.

If financial institutions can establish a feedback loop to improve AI models continuously — monitoring performance, analysing outcomes and incorporating feedback from human experts — they can optimise the accuracy and effectiveness of their AML efforts.
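One very simple form such a feedback loop could take is retuning an alert threshold from analyst dispositions: if too many raised alerts turn out to be false positives, tighten the threshold; if precision is comfortably above target, loosen it to catch more. The function, parameters and target figure below are all illustrative assumptions, not a description of any specific product.

```python
def retune_threshold(threshold, feedback, target_precision=0.8, step=0.05):
    """Adjust an alert threshold using analyst feedback.
    `feedback` is a list of (score, confirmed) pairs for alerts
    the model produced; `confirmed` is True when an analyst upheld
    the alert."""
    raised = [(s, ok) for s, ok in feedback if s >= threshold]
    if not raised:
        return threshold  # no evidence either way
    precision = sum(ok for _, ok in raised) / len(raised)
    if precision < target_precision:
        return min(1.0, threshold + step)  # too many false positives
    return max(0.0, threshold - step)      # room to cast a wider net

# Half the raised alerts were false positives, so the threshold rises.
feedback = [(0.9, True), (0.7, False), (0.8, False), (0.95, True)]
print(retune_threshold(0.6, feedback))
```

The same monitor-analyse-adjust pattern applies to richer model updates, such as periodic retraining on analyst-labelled cases.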

Embracing AI in AML

In the AML compliance space, AI has the potential to enhance and even transform capabilities.

Institutions can embrace AI to access and process risk-relevant data, detect threats and manage complexity, while ensuring fairness and unbiased decision-making. 

To maximise its effectiveness, AI relies heavily on the quality of the data it uses and a symbiotic relationship between technology and human expertise.

It is this partnership, between compliance and business professionals and AI, that will really enable the financial industry to meet the growing challenge of financial crime.

Keith Berry is the general manager of know your customer solutions at Moody’s Analytics