Facial Recognition – should we be concerned?

Facial recognition is clever technology. From selfies to surveillance, its use is becoming far more widespread. Should we be worried? Are the right checks and balances being put in place?

The ICO has just announced a joint investigation with the Australian Information Commissioner into the activities of a company called Clearview. Its facial recognition app allows users to upload a photo of someone and match it to photos collected from the internet. It’s reported the app uses a database of more than 3 billion images that have been scraped from various social media platforms and other websites.

Facial recognition technologies are also being trialled by several UK police forces and used in other countries for police and security purposes, such as at airports, other major transport hubs and at large events. For example, the Danish supervisory authority has approved its use to identify football fans who’ve been banned from stadiums.

In some areas it’s definitely being seen as a step too far. Both the French and Swedish supervisory authorities have declared its use when trialled in schools as highly intrusive and out of kilter with data protection law.

So are the right checks and balances in place to protect people whose images are captured?

What’s clear is that the accuracy of the algorithms behind this technology is being questioned, as are the privacy implications. Where facial recognition is to be used, there needs to be a strong justification for its use and robust safeguards in place to protect people.

What is facial recognition?

Facial Recognition (FR) is a broad term used to describe technologies that identify, catalogue and track human faces. The technology can be used in many ways, but probably the biggest topic of debate relates to facial images captured via CCTV or photograph and processed to extract biometric identifiers.

These identifiers typically include the unique ratios between an individual’s facial features, such as their eyes, nose and mouth. These are matched to an existing database of images and biometric data to identify and track specific individuals.
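
To make the idea concrete, here is a minimal, purely illustrative sketch in Python. The templates, names and threshold below are all invented; real systems derive embeddings from deep learning models rather than simple feature ratios, but the matching step follows the same principle of measuring a distance and applying a threshold.

```python
import numpy as np

# Invented biometric templates: each face reduced to a fixed-length vector of
# measurements (real systems use learned embeddings, not hand-picked ratios).
database = {
    "person_a": np.array([0.42, 0.31, 0.77, 0.15]),
    "person_b": np.array([0.40, 0.29, 0.75, 0.14]),
    "person_c": np.array([0.90, 0.62, 0.11, 0.58]),
}

def identify(probe, threshold=0.05):
    """Return database entries whose template lies within `threshold` of the probe.

    This is one-to-many identification: the probe is compared against every
    enrolled template, so a loose threshold can produce false positive matches.
    """
    matches = []
    for name, template in database.items():
        distance = np.linalg.norm(probe - template)  # Euclidean distance
        if distance <= threshold:
            matches.append((name, round(float(distance), 4)))
    return sorted(matches, key=lambda m: m[1])

# A probe image reduced to the same kind of template.
probe = np.array([0.41, 0.30, 0.76, 0.15])
print(identify(probe))  # both person_a and person_b fall inside the threshold
```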

The risk of false positives

The use of facial recognition in real-time, known as Live Facial Recognition (LFR), raises significant privacy and human rights concerns, such as when it is used to match faces to a database for policing and security purposes.

A study of facial recognition technology in the US by the National Institute of Standards and Technology (NIST) found that systems were far worse at identifying people of colour than white people. While results depended on the algorithms used, NIST found that some facial recognition software produced false positive rates for Black and Asian people 10 to 100 times higher than for white people.

NIST also found the algorithms were worse at identifying women than men. Clearly there are huge concerns to be addressed, brought into sharp focus now with the Black Lives Matter movement.

Interestingly, there was no such dramatic difference in false positives in one-to-one matching between Asian and white faces for algorithms developed in Asia.
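
As a rough illustration of how such disparities are measured, the sketch below (Python, with entirely synthetic similarity scores) computes a false positive rate per group at a single shared threshold. The distributions and the offset between them are invented; the point is only to show how a shifted score distribution for one group turns into a much higher false positive rate when everyone is judged against the same threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(impostor_scores, threshold):
    """Fraction of non-matching ('impostor') comparisons wrongly accepted as matches."""
    return float(np.mean(impostor_scores >= threshold))

# Synthetic similarity scores for comparisons that should NOT match.
# The higher mean for group_b is invented purely to illustrate bias.
impostor_scores = {
    "group_a": rng.normal(loc=0.30, scale=0.10, size=10_000),
    "group_b": rng.normal(loc=0.45, scale=0.10, size=10_000),
}

threshold = 0.60  # one operating threshold applied to everyone
for group, scores in impostor_scores.items():
    print(group, false_positive_rate(scores, threshold))
# group_b's rate comes out dozens of times higher than group_a's at the same
# threshold – the kind of disparity the NIST evaluation reported.
```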

How did we get here?

Early in 2018, Amazon began selling a facial recognition AI product to US police departments. “Amazon Rekognition” attracted the condemnation of human rights groups and AI experts, who criticised the product’s high error rate and propensity for mistaking black US Congresspeople for known criminals.

In July 2018, the American Civil Liberties Union (ACLU) found 28 false matches between US Congress members and pictures of people arrested for a crime.

A later article in The Independent in May 2020 revealed that “Rekognition” had incorrectly matched more than 100 photos of politicians in the UK and US to criminals who had previously been arrested by police.

In May 2019 San Francisco became the first US city to ban the use of facial recognition by transport and law enforcement agencies, recognising the threats to civil liberties. Very recently Boston has followed suit.

However, other US cities and other countries around the world have been trialling similar technology. Notably, police and security forces are testing live facial recognition systems as a way of identifying criminals and terrorists at major transport hubs, such as airports and stations, and at events and other public gatherings.

In June 2020, following ongoing criticism, Amazon announced it would stop supplying police with the technology for a year. Microsoft has said it will stop selling facial recognition technology to police departments until there is more regulation in place. IBM also said it will no longer offer its facial recognition software for “mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values”.

Some have speculated other reasons are at play for the apparent change of heart. If people are going to be wearing face masks for the foreseeable future, facial recognition models could be rendered temporarily useless!

UK policing and security applications

In the UK, the Metropolitan Police, Manchester, Leicester and South Wales Police forces have been trialling LFR over recent months. The Met announced they are using LFR to find suspects on watchlists for serious and violent crime and also to help find children and vulnerable people.

In July 2019, a study by Professor Peter Fussey of the University of Essex, reported in the New Statesman, found that matches in the London Metropolitan Police trials were wrong 80% of the time, potentially leading to serious miscarriages of justice.

At the time of writing this article, a High Court ruling that the South Wales Police’s use of facial recognition technology is lawful is being challenged in the Court of Appeal.

Mr Ed Bridges from Cardiff has launched a legal challenge to South Wales Police’s use of facial recognition after his photo was taken while he was out shopping. Backed by the human rights group Liberty, he argues that the technology breaches human rights laws and is discriminatory. Mr Bridges commented:

“This technology is an intrusive and discriminatory mass surveillance tool and I’m optimistic that the court will agree that it clearly threatens our rights.”

Hang on… how are the reference databases of known facial images compiled?

Can you check if your face is on the database? How can you exercise your right to object? All valid questions which we’ve struggled to find answers to. The findings of the joint UK-Australian investigation into Clearview will be interesting. Clearview have provided FR services to law enforcement agencies around the world, including the FBI and the US Department of Homeland Security.

Privacy concerns over the use of biometric data

Any facial recognition technology capable of uniquely identifying an individual is likely to be processing biometric data (i.e. data which relates to the physical, physiological or behavioural characteristics of a person).

Biometric data falls under the definition of ‘special category’ data and is subject to strict rules. Therefore, to process it compliantly in the European Union, a lawful basis must be identified AND a condition must also be found in GDPR Article 9 to justify the processing. In the absence of explicit consent from the individual, which is not practical in most FR applications, this may be tricky to achieve.

The EDPB guidelines

The European Data Protection Board (EDPB) issued draft guidelines (3/2019) for public consultation on processing of personal data through video devices. The draft guidelines specifically address the use of facial recognition technology. The following threats to privacy were identified:

  • Lack of transparency – an intrusion into the private lives of members of the public who have not consented to, and may not be aware of, the collection of their images or the purposes for which they are stored.
  • Misuse – images retrieved may be used for purposes other than those notified or consented to.
  • Accuracy – inherent bias within the technology may result in false positive matches or discrimination.
  • Automated decision-making – decisions which may significantly affect individuals may be based solely on the facial recognition software.

So where does all this leave us?

It’s clear LFR represents a step change from the previous generation of CCTV. Police forces and other organisations using this technology need to properly assess their compliance with data protection law and guidance.

This includes how police watchlists are compiled, which images are used and for what purpose, what confidence level for recognition of an individual should be applied and how the potential for false positives will be addressed. The controller must be ready to demonstrate their compliance.
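
To make that trade-off concrete, here is a small sketch with invented score distributions and thresholds. Raising the match threshold reduces false alerts against a watchlist but increases the chance of missing a genuine match; where that balance is struck is an operational choice the controller has to justify and document.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented similarity scores: higher means "looks more like the watchlist entry".
genuine = rng.normal(loc=0.75, scale=0.10, size=5_000)    # probe IS on the watchlist
impostor = rng.normal(loc=0.40, scale=0.10, size=50_000)  # probe is NOT on the watchlist

for threshold in (0.50, 0.60, 0.70):
    false_alert_rate = np.mean(impostor >= threshold)  # innocent people flagged
    miss_rate = np.mean(genuine < threshold)           # watchlist subjects not flagged
    print(f"threshold={threshold:.2f}  false alerts={false_alert_rate:.3%}  misses={miss_rate:.3%}")
```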

I for one will be watching intently for the Court of Appeal verdict on Ed Bridges vs South Wales Police and the findings from the UK-Australian investigation.

Update: Use of automated facial recognition by South Wales Police ruled unlawful – August 2020