Technology | Nov 23 2020

The implications of AI and human decision-making

  • Describe the implications of the Uber AI case for financial services
  • Identify the GDPR implications of using AI
  • Explain what automation bias is

Drivers allege that they have not been given sufficient information about how this algorithm processes their personal data, and that they have not been provided with access to the personal data taken into account in the decision-making.

The more recent challenge, from October, concerns drivers who had their accounts terminated due to alleged fraudulent activity, which was detected using one of Uber’s many AI systems. The ADCU claims that this system automatically deactivated the drivers in question without human involvement in the decision and without the drivers being given any chance to appeal the decision or request human intervention. 

Although the cases relate to decisions made about employees, the principles extend far more widely and the implications will be important for any firms looking to implement AI technologies to help with decision-making. 

“Solely-automated” decisions and meaningful human intervention

One of the key questions at issue is the extent of human oversight required for an automated decision not to be considered “solely” automated (and therefore to fall outside the Article 22 restrictions).

Uber has stated publicly that the decisions to fire the employees were reviewed by a human, but exactly what this human review involved is currently unclear. 

Guidance from both the Information Commissioner’s Office and the European Data Protection Board suggests that in order for an automated decision to fall outside of the restrictions, human input must be meaningful and must involve the human in question reviewing and analysing all relevant data.

Uber may claim that a human reviewed the decisions, but if all that human did was validate and “rubber-stamp” the algorithm’s output, without reviewing any of the underlying data or taking steps to question why the decision was made, this is unlikely to be deemed sufficient to take the decision outside of the “solely automated” category.

The ICO also cautions against “automation bias”: if a human reviewing a decision relies so heavily on the automated output that they stop exercising their own judgement, the decision is likely to be caught by the restrictions, even where the intention of the human review was to factor in other considerations.

Legal bases for wholly automated decisions

One of the primary allegations in the ADCU’s October challenge is that Uber did not have an appropriate legal basis to make firing decisions on a wholly automated basis. In other words, Uber did not fulfil one of the conditions under Article 22 of the GDPR.

Given that Uber’s publicly stated view is that the decisions were not fully automated, no public argument has yet been raised regarding the application of the Article 22 conditions.
