Call for ban on use of live facial recognition

October 2023

Live facial recognition is being used by UK police forces to track and catch criminals and may be used by retailers to crack down on shoplifting. Is live facial recognition a force for good or a dangerous intrusion on people’s privacy?

The announcement by the UK Government of plans for police to access passport photos to help catch criminals has led to a call for an immediate ban on live facial recognition surveillance.

The accuracy of the algorithms behind this technology is being questioned, as are the privacy implications. Where facial recognition is used, there needs to be a strong justification for its use and robust safeguards in place to protect people.

What is live facial recognition?

Live facial recognition (LFR) is a broad term used to describe technologies that identify, catalogue and track human faces. The technology can be used in many ways, but probably the biggest topic of debate relates to facial images captured via CCTV or photographs which are processed to extract biometric identifiers.

These identifiers typically include the unique ratios between an individual’s facial features, such as their eyes, nose and mouth. These are matched to an existing biometric ‘watchlist’ to identify and track specific individuals.
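To make the matching step more concrete, here is a minimal sketch of how an embedding-based watchlist comparison can work: a face is reduced to a numeric vector and compared against stored vectors, with an alert only when the distance falls below a threshold. The function names, 128-dimension vectors and 0.6 threshold are illustrative assumptions, not details of any deployed LFR system.

# Minimal sketch of watchlist matching with face embeddings.
# All names, the vector size and the 0.6 threshold are illustrative
# assumptions, not details of any deployed LFR system.
from typing import Dict, Optional
import numpy as np

def match_against_watchlist(probe: np.ndarray,
                            watchlist: Dict[str, np.ndarray],
                            threshold: float = 0.6) -> Optional[str]:
    """Return the watchlist identity closest to the probe embedding,
    but only if the distance falls below the alert threshold."""
    best_id, best_dist = None, float("inf")
    for identity, reference in watchlist.items():
        dist = float(np.linalg.norm(probe - reference))  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist < threshold else None

# Illustrative use: in a real system the embeddings would come from a
# face-embedding model applied to CCTV frames; here they are random vectors.
rng = np.random.default_rng(0)
watchlist = {"subject_A": rng.normal(size=128), "subject_B": rng.normal(size=128)}
probe = watchlist["subject_A"] + rng.normal(scale=0.01, size=128)  # near-duplicate face
print(match_against_watchlist(probe, watchlist))  # -> "subject_A"

In practice the quality of the embedding model, the composition of the watchlist and the choice of threshold determine how often the system raises false alerts, which is where the accuracy and bias concerns discussed below come in.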

Use of LFR by UK police forces

The Home Office says facial recognition has a ‘sound legal basis’, has already led to criminals being caught and could also help the police in searching for missing or vulnerable people.

Facial recognition cameras are being used to scan the faces of members of the public in specific locations. Currently UK police forces using the technology tell people in advance about when and where LFR will be deployed, with physical notices alerting people entering areas where it’s active.

However, the potential for police to be able to access a wider range of databases, such as passports, has led a cross-party group of politicians and privacy campaigners to say both police and private companies should ‘immediately stop’ their use of such surveillance, citing concerns about human rights and discrimination.

Silkie Carlo, Director of Big Brother Watch, says: “This dangerously authoritarian technology has the potential to turn populations into walking ID cards in a constant police line-up.”

It’s worth noting that in 2020 the Court of Appeal in the UK ruled South Wales Police’s use of facial recognition was unlawful.

Use of LFR by retailers

Some of the UK’s biggest supermarkets and retailers are also turning to face-scanning technology in a bid to combat a significant rise in shoplifting.

Earlier this year the ICO announced its findings from an investigation into the live facial recognition technology provided to the retail sector by the security firm Facewatch. The aim of the technology is to help businesses protect their customers, staff and stock. People’s faces are scanned in real time as they enter a store, and an alert is raised if a subject of interest has entered.

During its investigation the ICO raised concerns, including those surrounding the amount of personal data collected and the need to protect vulnerable people by making sure they don’t become a ‘subject of interest’. Based on information provided by Facewatch about improvements made, and ongoing improvements, the ICO concluded the company had a legitimate purpose for using people’s information for the detection and prevention of crime.

Collaboration between police and retailers

Ten of Britain’s largest retailers, including John Lewis, Next and Tesco, are set to fund a new police operation, Project Pegasus, under which police will run CCTV pictures of shoplifting incidents provided by the retailers against the Police National Database.

The risk of false positives

The use of live facial recognition raises significant privacy and human rights concerns, particularly when it is used to match faces against a database for policing and security purposes.

A 2019 study of facial recognition technology in the US by the National Institute of Standards and Technology (NIST) found that systems were far worse at identifying people of colour than white people. While results depended on the algorithms used, NIST found that some facial recognition software produced false positive rates for black and Asian people that were 10 to 100 times higher than for white people.

NIST also found the algorithms were worse at identifying women than men. Clearly there are huge concerns to be addressed, brought into sharp focus by the Black Lives Matter movement. Interestingly, there was no such dramatic difference in false positives in one-to-one matching between Asian and white faces for algorithms developed in Asia.
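These error rates matter because LFR scans very large numbers of faces, so even a small false positive rate can produce many wrongful alerts. The rough calculation below illustrates the point with purely hypothetical numbers; the footfall and rates are assumptions for illustration, not figures from the NIST study.

# Back-of-the-envelope false positive estimate with illustrative numbers.
faces_scanned_per_day = 50_000   # hypothetical footfall past an LFR camera
false_positive_rate = 0.001      # hypothetical 0.1% chance of a wrong match per scan

expected_false_alerts = faces_scanned_per_day * false_positive_rate
print(f"Expected false alerts per day: {expected_false_alerts:.0f}")  # 50

# If, as NIST observed for some algorithms, the rate is 10x higher for a
# particular demographic group, that group bears proportionally more
# wrongful alerts even at identical footfall.
print(f"At 10x the rate: {expected_false_alerts * 10:.0f}")           # 500

Even a system that is “99.9% accurate” can therefore generate a steady stream of false matches when deployed at scale, and those false matches may fall disproportionately on particular groups.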

Privacy concerns

Any facial recognition technology capable of uniquely identifying an individual is likely to be processing biometric data (i.e. data which relates to the physical, physiological or behavioural characteristics of a person).

Biometric data falls under the definition of ‘special category’ data and is subject to strict rules. To compliantly process special category data in the UK or European Union, a lawful basis must be identified AND a condition must also be met under GDPR Article 9 to justify the processing. However, in the absence of explicit consent from the individual, which is not practical in most LFR applications, it may be difficult to show the processing meets Article 9 requirements.

Other privacy concerns include:

  • Lack of transparency – an intrusion into the private lives of members of the public who have not consented to and may not be aware of the collection or the purposes for which their images are being collected and used.
  • Misuse – images retrieved may potentially be used for other purposes in future.
  • Accuracy – inaccuracies inherent within LFR reference datasets or watchlists may result in false positives and the potential for inaccurate outcomes which may be seen as biased or discriminatory.
  • Automated decision-making – if decisions which may significantly affect individuals are based solely on the outcomes of live facial recognition.

Requirement to conduct a Data Protection Impact Assessment (DPIA)

A DPIA must be conducted before organisations or public bodies begin any type of processing that is likely to result in a ‘high risk’ to the rights and freedoms of individuals.

This requirement includes:

  • the use of systematic and extensive profiling with significant effects on individuals;
  • the processing of special category or criminal offence data on a large scale; and
  • the systematic monitoring of publicly accessible places on a large scale.

In our view, any planned use of LFR is very likely to fall under the requirement for the organisation or public body to conduct a DPIA in advance of commencing the activity and take appropriate steps to ensure people’s rights and freedoms are adequately protected.

So where does this leave us?

Police forces and other organisations using LFR technology need to properly assess their compliance with data protection law and guidance.

This includes how police watchlists are compiled, which images are used and for what purpose, which reference datasets they use, and how accurate and representative of the population those datasets are. The potential for false positives or discriminatory outcomes should be addressed.

Any organisation using LFR must be ready to demonstrate the necessity, proportionality and compliance of its use.

Meanwhile, across the Channel, members of the European Parliament have agreed to ban live facial recognition using AI in a draft of the EU’s Artificial Intelligence Act. Will the UK follow suit?