Facial Recognition Technology in UK Policing: High Accuracy but Public Awareness Trails Behind
The UK government is set to significantly expand police use of facial recognition technology (FRT) across England and Wales, with plans to increase the number of live facial recognition vans from 10 to 50. The move aims to give every police force in England and Wales access to live FRT, backed by a £26 million investment in a national system and a further £11.6 million specifically for live facial recognition. The announcement arrives amid a 12-week public consultation on police use of FRT, underscoring the technology’s growing prominence in law enforcement.
Expanding the Reach of Facial Recognition in Policing
Facial recognition technology has already seen considerable use in the UK, with police forces employing it primarily in three distinct ways. All UK police forces have the capability to apply “retrospective” facial recognition, analysing images from CCTV footage to identify suspects after events. Furthermore, 13 out of 43 forces use live FRT in public spaces to detect wanted or missing persons in real time. Additionally, South Wales and Gwent police forces use a mobile app enabling officers to take a photo during stops and compare it against watchlists featuring individuals of interest due to criminal activity or missing status.
This expansion reflects the technology’s track record: Home Secretary Shabana Mahmood highlighted that FRT has already contributed to approximately 1,700 arrests by the Metropolitan Police alone, signalling substantial potential for crime prevention and investigation.
How Accurate Is Facial Recognition Technology Today?
Contrary to common misconception, facial recognition does not store photographs. Instead, it creates digital numerical representations of faces, which it compares against database images to assess similarity. Advances in deep convolutional neural networks, artificial intelligence models designed to mimic aspects of human brain processing, have dramatically enhanced the technology’s accuracy.
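The comparison step described above can be sketched in a few lines. The code below is purely illustrative, not any police system’s actual algorithm: it assumes faces have already been converted into numeric vectors ("templates") and compares them with cosine similarity, one common choice for this kind of matching. The vectors, their dimensionality, and the 0.9 threshold are all invented for the example.

```python
import math

def cosine_similarity(a, b):
    # Compare two face "templates" (numeric vectors), not photographs.
    # A score near 1.0 means the two vectors point in nearly the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional templates; real systems use hundreds of dimensions.
probe = [0.12, -0.45, 0.88, 0.05]       # face seen by the camera
candidate = [0.10, -0.40, 0.90, 0.07]   # face stored on a watchlist

score = cosine_similarity(probe, candidate)
is_match = score >= 0.9  # the threshold is an assumed tuning parameter
```

Where the operator sets that threshold is exactly the trade-off discussed next: a lower threshold catches more true matches but raises more false alarms.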
According to the US National Institute of Standards and Technology (NIST), regarded as the global benchmark for assessing FRT algorithms, leading systems have false negative rates (missed identifications) of less than 1% and false positive rates (incorrect identifications) around 0.3%. Similarly, the UK’s National Physical Laboratory reports that the system in use by UK police correctly identifies individuals in about 99% of cases, striking a balance between few missed matches and few false alarms.
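To make the two error rates concrete, the sketch below computes them from raw counts. The counts are hypothetical, chosen only so the resulting rates land near the figures quoted above; they are not NIST or NPL data.

```python
def error_rates(tp, fn, fp, tn):
    # False negative rate: share of genuine matches the system missed.
    fnr = fn / (tp + fn)
    # False positive rate: share of non-matches the system wrongly flagged.
    fpr = fp / (fp + tn)
    return fnr, fpr

# Illustrative counts: 1,000 genuine pairs with 8 misses,
# and 10,000 impostor pairs with 30 false alarms.
fnr, fpr = error_rates(tp=992, fn=8, fp=30, tn=9970)
# fnr = 0.008 (0.8%), fpr = 0.003 (0.3%)
```

Note that the two rates have different denominators, so "99% accurate" headlines compress two distinct quantities into one number.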
In comparison, human operators undertaking similar face-matching tasks can have error rates exceeding 30%, illustrating that modern FRT often outperforms unaided human judgment.
Addressing Past Concerns Over Bias
Historically, FRT has been criticised for higher error rates on non-white faces: in 2018, one study found the error rate for darker-skinned women was 40 times higher than for white men. These discrepancies were largely due to limited and unbalanced training datasets that predominantly featured white male faces.
However, more recent systems deployed in the UK, US, and elsewhere have been trained on significantly larger and demographically balanced datasets, incorporating proactive measures to minimise bias. While slight variations in error rates by ethnicity persist, current false positive rates for non-white faces now fall below 0.5%, marking a significant improvement and easing concern over systemic racial bias in the technology’s application.
Public Understanding and Trust Remain Limited
Despite the technological advancements and increased use, public awareness and understanding of FRT’s capabilities lag behind. A January 2026 survey involving 1,001 respondents across England and Wales found only about 10% felt confident in their knowledge of how and when police use the technology—a notable increase from 2020 but still relatively low.
Interestingly, while nearly 80% of respondents reported feeling comfortable with police using FRT to search watchlists, only around 55% trusted the police to use it responsibly. That trust figure has fallen since 2020 (from 79% to 55%), as has comfort on some measures (from 63% to 55%), indicating growing concern about the responsible deployment of FRT.
The public remains particularly supportive of police use of FRT for criminal investigations (89%), searching for missing persons (89%), and identifying individuals who have committed crimes (89%), consistent with previous findings. Still, the gap between acceptance and understanding suggests a need for improved transparency and education.
Recommendations for the Future
Research by academics at the University of Lincoln and the University of Reading highlights the importance of keeping the public informed about the realities and limitations of facial recognition technology. As FRT adoption widens in policing, clear communication about its benefits, safeguards, and ethical use is critical.
Authors suggest that the forthcoming legal framework regulating FRT should not only govern police use but extend to all sectors employing the technology. This comprehensive approach could preserve public trust by ensuring consistent standards and preventing misuse by non-police actors.
A streamlined, national policing service—proposed in the government’s recent white paper—could facilitate uniform use of up-to-date FRT systems across all forces, alongside consistent training to prevent demographic biases and ensure fair application.
Conclusion
Facial recognition technology employed by UK police has advanced remarkably in accuracy and effectiveness, contributing to numerous arrests and investigations. However, to fully harness its potential while maintaining public trust, it is crucial to close the gap in public understanding through transparency, education, and robust regulation that covers all users of the technology. As the technology becomes more prevalent, a balanced approach that respects privacy rights and prevents bias will be central to its successful integration in policing.
This article is based on research conducted by Kay Ritchie, Associate Professor in Cognitive Psychology at the University of Lincoln, and Katie Gray, Associate Professor at the University of Reading, as published in The Conversation.