Opinion | May 12 2023

'It is only a matter of time before regulatory action catches up with AI'

The increasing presence of artificial intelligence in everyday life has caused ripples across the business and finance sectors, but its deployment has also created a potential new legal and regulatory minefield.

As the rapid rollout of the technology gathers pace and it becomes increasingly embedded in many different aspects of financial services businesses, the spotlight is now on just what checks and balances (and legal and regulatory safeguards) may be needed.

At present, there is no over-arching legislation in relation to the use of AI, and that seems likely to remain the case if recent government comment is anything to go by. That is not to say that there is no regulation – far from it.

Given AI is used in a variety of sector-specific ways, a nuanced, sector-specific approach to the potential risks may well be more fit for purpose.

AI has swept through the world of finance, bringing with it significant benefits for firms, customers and markets generally. At the same time, it has the potential to cause significant harm.

AI may be used in customer profiling, including risk profiling and identifying potentially suitable financial products and/or services. This could be largely positive for consumers, as it may mean operational savings that can be passed on to them, as well as an offering more tailored to the individual’s needs and risk profile.

However, poorly performing systems, those with inbuilt or inadvertent bias, and/or problems with human oversight, may result in unfair treatment of customers or potentially discriminatory behaviours.

In those circumstances, the use of AI could expose businesses to the same litigation and regulatory intervention as if the potentially discriminatory or unfair differential treatment had been undertaken by an employee. 

AI’s inexorable march into the workplace also throws up challenges for those responsible for its actions in relation to employees.

Last month, the New York City Council passed legislation regulating the use of automated employment decision tools (AEDT) in an attempt to curb reported AI bias. The law is the world’s first in relation to AI bias and will likely lead to similar action across the globe, given AI’s growing prevalence in everyday workplace decision-making.

The law bans the use of AEDT algorithms using protected characteristics – such as age, gender, race and sexuality – when making any decisions relating to employees.

Employers using such AI are also required to test the software for potential bias and discrimination against protected classes, following a flood of reports of endemic discrimination arising from AI bias.
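
To illustrate what such testing can involve: a common starting point is to compare selection rates across groups and flag any group whose rate falls well below that of the most-favoured group. The Python sketch below is a minimal, hypothetical example of that calculation; the column names, sample data and the 0.8 "four-fifths" threshold are assumptions for illustration, not the methodology prescribed by the New York law.

```python
# Minimal, illustrative adverse-impact check for a screening tool's output.
# Column names, data and the 0.8 threshold are assumptions for this sketch.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Selection rate of each group divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Hypothetical screening results: 1 = candidate advanced, 0 = rejected.
results = pd.DataFrame({
    "gender":   ["female", "female", "female", "male", "male", "male"],
    "selected": [0, 1, 0, 1, 1, 0],
})

ratios = impact_ratios(results, "gender", "selected")
print(ratios)                # female 0.5, male 1.0 in this toy data
print(ratios[ratios < 0.8])  # groups falling below the four-fifths threshold
```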

In one study, AI-driven hiring systems were shown to be twice as likely to reject female applicants as male applicants. In another, 84 per cent of executives reported that their AI algorithms only reinforced, rather than avoided, racial or gender bias.

Taking firm action to protect workers’ rights is a welcome move for New York’s workforce, but across the globe there is still a long road ahead.

While the UK’s Equality Act 2010 prohibits workplace discrimination in general terms, AI-specific guidance and legislation may be needed as employers increase their reliance on automation. In the meantime, the area may prove fertile ground for employment-related claims.

One of the biggest areas of concern with the use of AI is in relation to data protection and privacy. Last month, Italy’s data regulator temporarily banned the use of ChatGPT over data security concerns, though more recently it has said it will allow its return if developer OpenAI takes “useful steps” to address concerns.

Here in the UK, the government published a white paper at the end of March outlining five principles for the safe and innovative use of AI: safety, transparency, fairness, accountability and contestability.

Notably, it states that regulators should consider the need for people to have clear routes to dispute harmful outcomes or decisions generated by AI. 

The government’s over-arching aim is stated to be the need to avoid “heavy handed legislation which could stifle innovation”. Instead, it will empower existing regulators to prepare an approach tailored to how AI is used in their specific sector.

Over the next 12 months, regulators (including the Financial Conduct Authority) are expected to issue further guidance.

However, given financial services is one of the most heavily regulated sectors, it may be a case of fine-tuning the existing regulatory framework (as the FCA has previously suggested) rather than anything more game-changing.

Businesses are, by and large, very sensitive to the need to comply with the Data Protection Act 2018 and GDPR. An AI solution will likely involve the processing of large amounts of data, which may include personal data.

Every business using AI will need to ensure that its AI tools collect and use personal data in a way that complies with data protection legislation.

If it does not, it may face enforcement action, with the threat of eye-watering fines of up to €20mn (£17mn) or 4 per cent of global turnover, as well as potential claims from the affected data subjects, such as customers or employees.

The use of AI, on the other hand, may enable or enhance GDPR compliance. A business must have “appropriate technical and organisational measures” in place to keep data secure.

AI can, for instance, be an effective cybersecurity tool, predicting potential cyber threats and/or identifying potential vulnerabilities in a system.

Of course, as with any use of AI, the system is only as robust as its training data and, if that data is biased, the system is likely to be biased too, which may have serious security implications and may fall foul of the appropriate technical measures requirement.

While there is doubtless some way to go for regulation to catch up with the increasing use of AI, the next year or so is likely to bring more clarity.

It is also only a matter of time before claims relating to businesses’ use of AI start to filter through the courts, and the first test cases will be keenly studied by legal experts as the profession grapples with the brave new world AI has ushered in.

Abigail Healey is a consultant at Quillon Law