Long Read | Mar 12 2024

AI a double-edged sword when fighting fraud


Every year, billions of pounds are lost as a result of fraud.

Artificial intelligence and machine learning models have long been used in fraud management to identify patterns, irregularities and suspicious language, and to augment heuristic rules-based scoring, making it faster and more accurate.
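
To make that augmentation concrete, the sketch below pairs a handful of heuristic rules with an off-the-shelf anomaly model (scikit-learn's IsolationForest). The features, weights and thresholds are purely illustrative assumptions, not a production scoring policy.

```python
# Illustrative sketch only: heuristic rules augmented by an ML anomaly score.
# Feature names, weights and the tiny "history" sample are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

def rule_score(txn):
    """Simple heuristic rules; each hit adds points to the score."""
    score = 0
    if txn["amount"] > 10_000:
        score += 40
    if txn["new_payee"]:
        score += 30
    if txn["hour"] < 6:          # transaction at an unusual time of day
        score += 20
    return score

# Past transactions as numeric features: [amount, hour, new_payee]
history = np.array([[120, 14, 0], [80, 10, 0], [45, 19, 0], [15000, 3, 1]])
model = IsolationForest(random_state=0).fit(history)

def combined_score(txn):
    features = [[txn["amount"], txn["hour"], int(txn["new_payee"])]]
    # score_samples is lower for more anomalous points, so negate it
    anomaly = -model.score_samples(features)[0] * 100
    return 0.5 * rule_score(txn) + 0.5 * anomaly

print(combined_score({"amount": 12_500, "hour": 2, "new_payee": True}))
```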

Generative AI boosts these efforts and can help improve data ingestion, the management and accuracy of risk-scoring policies, and investigation.

More specifically, GenAI-based co-pilots can come up with model improvements, make recommendations on how to investigate fraud management alerts and cases, provide insights into reports, and help interpret and write reports on suspicious activities. 
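
As a hedged illustration of the co-pilot idea, the snippet below asks a hosted LLM to draft a suspicious-activity narrative for an investigator to review. The OpenAI client, model name and alert fields are assumptions standing in for whichever GenAI service and case data an enterprise actually uses.

```python
# Illustrative co-pilot sketch: draft a suspicious-activity summary for a
# human investigator. Client, model name and alert fields are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

alert = {
    "customer_id": "C-1042",
    "trigger": "three transfers just under the reporting threshold in 24 hours",
    "risk_score": 87,
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You draft concise suspicious-activity narratives for a "
                    "human investigator to review, edit and approve."},
        {"role": "user", "content": f"Summarise this alert for a report: {alert}"},
    ],
)
print(response.choices[0].message.content)
```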

However, GenAI is not without risks. Malicious actors are able to use its image, text, and voice generation capabilities to falsify documents, impersonate customers, and take over accounts on a large scale, without the need for manual labour. 

Deepfakes present new fraud management obstacles

One of the biggest risks is fraudsters using GenAI to create deepfakes and simulate legitimate-looking online sessions.

For example, GenAI technology can be used to generate images and imitate a customer's voice over the phone in order to enrol in services, such as opening bank accounts or signing up for an insurance policy.

Furthermore, GenAI can be used to defeat biometric authentication on protected websites and mobile applications, such as call centre voice biometrics or facial recognition in mobile applications or web browsers.


Using deepfake technology to clone and impersonate an individual will undoubtedly lead to fraudulent financial transactions that will impact not only individuals but also enterprises on a wider scale.

Indeed, we are seeing more and more examples of malicious actors targeting larger companies by using GenAI to impersonate a senior executive and authorise activities, including the sharing of sensitive data or wire transfers to criminals.

Challenges of risk scoring and deepfake identification

While GenAI can be used to perpetrate malicious activity, it can also be used to help mitigate cyber threats, including recognising and thwarting deepfakes.

However, organisations that use defensive GenAI for risk scoring and deepfake identification are also exposed to many challenges and issues. 

First, there is the risk of leaking intellectual property and encountering copyright violations, for example when data scientists paste sensitive corporate data into GenAI tools that do not have adequate security measures in place.

For individuals, this can also lead to privacy violations, as there is a chance personally identifiable information (PII) could be fed into or leaked through GenAI tools.

Second, it can be a challenge to explain the decisions GenAI helps make, leaving them almost indefensible to regulators, customers and pundits should the need arise.

Third, ensuring consistency and repeatability of GenAI outputs and decisions is also challenging. As with all AI tools, particularly in their infancy, repeated queries are not guaranteed to produce the same results.

As a result, version control of GenAI models, where measurable evidence is used to show that the next production GenAI model performs better than the previous one, is becoming an increasing priority.
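
One way to read that requirement is as a champion/challenger promotion gate: a candidate model only replaces the production model when it measurably outperforms it on a fixed holdout set. The sketch below assumes scikit-learn-style classifiers and an AUC metric, both illustrative choices rather than the only way to run such a gate.

```python
# Illustrative promotion gate: the challenger replaces the champion only if it
# beats it by a margin on a fixed holdout set. Metric and margin are assumptions.
from sklearn.metrics import roc_auc_score

def should_promote(champion, challenger, X_holdout, y_holdout, margin=0.01):
    """Return True if the challenger's AUC beats the champion's by `margin`."""
    auc_champion = roc_auc_score(y_holdout, champion.predict_proba(X_holdout)[:, 1])
    auc_challenger = roc_auc_score(y_holdout, challenger.predict_proba(X_holdout)[:, 1])
    return auc_challenger >= auc_champion + margin

# Usage sketch (hypothetical deployment step):
# if should_promote(current_model, candidate_model, X_val, y_val):
#     deploy(candidate_model)
```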

Fourth, hallucinations of GenAI models are also a problem. This boils down to the GenAI model providing fictitious and/or factually incorrect results, potentially because of misunderstanding the input or because of improper tuning.

Beyond producing misleading information, hallucinations can result in the GenAI model learning incorrect patterns and generating further false or misleading outputs.

Fifth, GenAI models are particularly vulnerable to malicious data injection, which can have significant and long-term consequences – and has the potential to eliminate any good that can come out of using the tool.

If training data is intentionally tampered with or the model is injected with malicious data, it can cause the GenAI model to permanently generate incorrect, false or highly offensive responses. 

Lastly, governance of GenAI models, such as testing them for prejudice and bias, is crucial, but it can be a challenge to put in place the processes and technical controls needed to manage model versions in an agile yet fully tracked manner.

Tackling GenAI fraud management

For enterprises, including financial institutions, to protect against the above risks it is imperative that they equip existing security and data architectures to handle GenAI functions. 

First, in order to protect against GenAI fraud, it is crucial that enterprises have monitoring and policies in place that reduce the chance of inadvertently sharing the company’s intellectual property, or personally identifiable information, with GenAI tools.
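
A minimal version of such a guardrail might scan every prompt for obvious PII before it leaves the organisation, as sketched below. The regular expressions are illustrative and far from exhaustive; a real deployment would more likely sit behind a dedicated data-loss-prevention control.

```python
# Illustrative pre-submission guardrail: redact obvious PII from a prompt
# before it is sent to an external GenAI service. Patterns are assumptions.
import re

PII_PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_phone":    re.compile(r"(?:\+44|\b0)\d{9,10}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("Customer jane.doe@example.com paid with card 4111 1111 1111 1111"))
```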

Being aware of, and staying apprised of, external factors that could dictate, affect or infiltrate the training data is key.


Second, the enterprise should obtain evidence of GenAI model explainability from the vendor in order to defend the output of GenAI. Explainability includes asking the vendor to provide reason codes that support GenAI’s decisions.
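
Reason codes are human-readable labels for the factors that pushed a score up, attached to every decision so that it can be defended later. The sketch below assumes the vendor exposes per-feature contributions (for example, a SHAP-style attribution); the codes, features and values are illustrative.

```python
# Illustrative reason codes: attach the top contributing factors to each
# decision so it can be explained later. Codes and contributions are assumptions.
REASON_CODES = {
    "device_new":      "R01: first login from an unrecognised device",
    "amount_unusual":  "R02: amount far outside the customer's normal range",
    "payee_high_risk": "R03: payee linked to previously confirmed fraud",
}

def explain(score, contributions, top_n=2):
    """Return the score with the top-N reason codes, largest contribution first."""
    top = sorted(contributions, key=contributions.get, reverse=True)[:top_n]
    return {"score": score, "reasons": [REASON_CODES[f] for f in top]}

print(explain(0.91, {"device_new": 0.42, "amount_unusual": 0.31, "payee_high_risk": 0.08}))
```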

Lastly, Forrester recommends implementing GenAI as part of a larger tool set. Indeed, using GenAI as only one of the tools available for fraud management and anti-money-laundering policy authoring will significantly reduce the opportunity for malicious activity.
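
In practice, that means treating the GenAI output as one signal among several rather than letting it act alone. A minimal sketch of such a corroboration rule follows; the thresholds and the two-out-of-three requirement are illustrative assumptions.

```python
# Illustrative corroboration rule: no single detector raises an alert on its
# own; at least two independent signals must agree. Thresholds are assumptions.
def decide(rule_score, ml_score, genai_score):
    signals = [rule_score > 70, ml_score > 0.8, genai_score > 0.8]
    if sum(signals) >= 2:
        return "raise alert for human investigation"
    return "allow and keep monitoring"

print(decide(rule_score=75, ml_score=0.65, genai_score=0.9))
```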

In summary

It is important to be mindful of third-party risk management when implementing GenAI.

All too often we see GenAI users focus too heavily on how the technology will boost productivity but fail to recognise or consider the threat to security and regulatory compliance.

For example, we see increasing numbers of employees feeding sensitive data into GenAI models such as ChatGPT, which could jeopardise the integrity of the company’s security.

Businesses looking to partner with or utilise third-party GenAI applications must implement thorough, clearly communicated security protocols before employees start using them.

Ultimately, GenAI has a lot of potential, but enterprises needing to use it for fraud management and broader security purposes have to approach it with caution.

As GenAI models become more ubiquitous, businesses must understand the impact the technology will have and identify potential security challenges to ensure that GenAI has a positive net impact on fraud management.

Andras Cser is vice-president and principal analyst at research and advisory company Forrester