Algorithmic decision-making has dominated the headlines in recent months.
Summer 2020 saw controversy over the allegedly discriminatory algorithm used to determine students’ A Level results, as well as the suspension of the Home Office’s visa-application algorithm and of South Wales Police’s use of facial recognition technology.
A landmark legal challenge against Uber, launched in October by the App Drivers and Couriers Union on behalf of several of Uber’s drivers, has added to the debate.
The ADCU alleges that Uber’s use of wholly automated decision-making to dismiss drivers contravenes the General Data Protection Regulation (GDPR) in a number of ways.
The case follows a separate ADCU challenge against the company, commenced in July this year, aimed at uncovering the algorithms used to manage and allocate jobs. Both challenges have been brought in the courts in Amsterdam.
The use of artificial intelligence (AI) to make key decisions is on the rise, particularly in the financial services sector. Algorithms can help streamline decision-making and, because of the vast amounts of data they can process, can lead to better-informed decisions.
For example, we are seeing firms increasingly using AI to evaluate loan eligibility using alternative data that does not centre purely on credit scores, thereby opening up credit to individuals for whom it might not otherwise be available.
AI can also be used to detect and prevent fraud, to personalise the customer experience by offering individualised financial advice, and to assess investment risk more accurately.
Algorithms and the GDPR
The GDPR is “technology-neutral”; it does not mention algorithms or AI explicitly. But it does contain an explicit prohibition, in Article 22, on making legal or significant decisions about individuals based solely on automated processing, unless:
a) The decision is necessary for entering into or performing a contract;
b) The individual affected by the decision has given their explicit consent; or
c) The decision is authorised by law.
It also gives individuals the right to receive meaningful information about the logic behind automated decisions and to have such decisions reviewed by a human.
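The Article 22 conditions described above can be expressed as a simple decision rule. The sketch below is purely illustrative (the class and field names are invented for this example, and it is of course no substitute for legal analysis): the prohibition is only engaged when a decision is both solely automated and legally or similarly significant, and even then one of the three exceptions can apply.

```python
from dataclasses import dataclass

# Illustrative model of the Article 22 conditions described above.
# All names here are invented for this sketch, not a real API.

@dataclass
class AutomatedDecision:
    solely_automated: bool        # no meaningful human involvement
    legal_or_significant: bool    # legal or similarly significant effect on the individual
    necessary_for_contract: bool  # exception (a): entering into / performing a contract
    explicit_consent: bool        # exception (b): explicit consent given
    authorised_by_law: bool       # exception (c): legal authorisation exists

def is_permitted(decision: AutomatedDecision) -> bool:
    """True if Article 22 is not engaged, or an exception applies."""
    if not (decision.solely_automated and decision.legal_or_significant):
        return True  # the prohibition is not engaged at all
    return (decision.necessary_for_contract
            or decision.explicit_consent
            or decision.authorised_by_law)

# Example: a fully automated account termination with no exception in place
termination = AutomatedDecision(
    solely_automated=True,
    legal_or_significant=True,
    necessary_for_contract=False,
    explicit_consent=False,
    authorised_by_law=False,
)
print(is_permitted(termination))  # False: the prohibition applies
```

Note that even where an exception applies, the GDPR still requires safeguards such as the right to obtain human intervention and to contest the decision.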
All other GDPR principles, such as transparency and respect for individuals’ rights, apply to the processing of personal data using AI just as they do to other types of processing.
The Uber challenges
The two Uber cases allege that Uber’s use of algorithmic decision-making breaches the GDPR in a number of ways. The first challenge, launched in July, relates to Uber’s use of an algorithm to manage and allocate jobs.
Drivers allege that they have not been given sufficient information about the processing of their personal data by this algorithm and have not been provided with access to their personal data that was taken into account in the decision-making.
The more recent challenge, from October, concerns drivers whose accounts were terminated over alleged fraudulent activity detected by one of Uber’s AI systems. The ADCU claims that this system automatically deactivated the drivers in question without any human involvement in the decision, and without giving the drivers a chance to appeal or to request human intervention.