Regulation  

FCA to use data to predict fraud

The financial regulator is upping its use of technology to catch out fraudsters, and has said data could help it predict which firms pose a risk.

In a speech this week (November 19) to Chatham House, Rob Gruppetta, head of the financial crime department at the Financial Conduct Authority (FCA), outlined the regulator’s view of artificial intelligence and machine learning, and how it could be used by the agency.

He said: "The rise of machine learning is largely driven by the availability of ever larger datasets and benchmarks, cheaper and faster hardware, and advances in algorithms and their user-friendly interfaces being made available online."

Mr Gruppetta said using algorithms and machine learning to hunt for anomalies that could point to crime was an increasingly important part of the FCA’s toolkit.

He said: "A crime like money laundering – a secret activity designed to convert illicit funds into seemingly legitimate gains – is particularly hard to measure."

Mr Gruppetta said deploying machine learning algorithms over data gave the FCA the ability to detect things that were previously impractical, like suspicious activity across different markets and venues.
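The speech does not describe the FCA's actual detection methods, but the kind of cross-venue screening Mr Gruppetta alludes to can be illustrated with a minimal, hypothetical sketch: flag any observation that sits unusually far from the rest of a firm's activity. All data and thresholds below are invented for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean - a crude anomaly screen."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > threshold * sigma]

# Illustrative daily trade volumes for one firm, pooled across venues;
# the final figure is a suspicious spike.
volumes = [100, 98, 103, 97, 101, 99, 102, 100, 98, 500]
print(flag_anomalies(volumes))  # flags the spike at index 9
```

A real system would use far richer features and more robust statistics, but the principle is the same: the algorithm surfaces outliers for a human investigator to examine.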

"In this way, we’re squeezing the space that criminals can operate in," he said.

He added: "We are moving away from a rule-based, prescriptive world to a more data-driven, predictive place where we are using data to help us objectively assess the inherent financial crime risk posed by firms."

Advisers and other financial services firms have come under pressure to provide the regulator with more data, to enable better oversight of practices and potential money laundering activity.

But Mr Gruppetta said making predictions based solely on machine learning algorithms could be misleading, "so we take great care to ensure that these are overlaid with appropriate financial crime and sector expertise".

He said although the agency was one of the first regulators to create a technology sandbox – where start-ups and established players can try out new ideas in the field – it applied a healthy degree of scepticism to its new tools.

He said: "We only ever use them as the first step in a rigorous, multi-layered risk assessment process to help us target the riskiest firms. Simply put, the algorithms improve, rather than replace, supervisory judgment."

The regulator has also used data and technology to create forward-looking models to catch out criminals.

Mr Gruppetta said: "Consider building a risk model using algorithms: using a set of risk factors and outcomes, we could come up with a kind of mathematical caricature of how the outcomes might have been generated, so we can make future predictions about them in a systematic way."
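One common way to build the kind of "mathematical caricature" he describes is to fit a statistical model to known risk factors and past outcomes, then score new cases with it. The sketch below is purely illustrative – the risk factors, data and model choice (a hand-rolled logistic regression) are assumptions, not the FCA's method.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights by plain stochastic gradient
    descent on log-loss - the 'caricature' of how outcomes arise."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    """Predicted probability that a firm presents a crime risk."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Hypothetical risk factors per firm:
# (share of high-risk clients, share of cash-intensive business)
X = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1), (0.7, 0.7), (0.1, 0.1)]
y = [1, 1, 0, 0, 1, 0]  # past outcome: 1 = financial-crime issue found

w, b = fit_logistic(X, y)
high = predict_risk(w, b, (0.85, 0.9))  # scores well above 0.5
low = predict_risk(w, b, (0.10, 0.10))  # scores well below 0.5
```

As the speech stresses, such a score would only be the first filter in a multi-layered assessment, with sector expertise overlaid on top.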

He concluded that combining human and artificial intelligence enabled the regulator to operate a system of "supervised supervision".