Published: 27 February 2026. The English Chronicle Desk. The English Chronicle Online.
The Metropolitan Police are set to pilot handheld facial recognition devices to verify citizens’ identities during routine stops. Mayor Sadiq Khan confirmed that one hundred officers will test the technology over a six-month period, describing it as a tool to reduce unnecessary arrests; critics, however, warn of potential civil liberties infringements. The announcement follows rising public concern over AI-powered policing, as the handheld scanners run software similar to that found in smartphones and commercial applications. Facial recognition has previously been deployed by UK forces using cameras mounted on vans and at fixed urban locations, reflecting a growing reliance on biometric identification.
Past deployments in Croydon, Manchester, and South Wales demonstrated the technology’s capability to identify individuals retrospectively, though errors have occasionally led to wrongful arrests. In one recent instance reported by the Guardian, a man was mistakenly arrested for burglary hundreds of miles from his home after the algorithm confused him with another person of similar ethnicity. Additionally, the Met signed a three-month £490,000 contract with American AI firm Palantir to detect rogue officers, highlighting a broader strategy of integrating artificial intelligence into law enforcement practices.
Zoë Garbett, the Green Party London Assembly member who prompted Khan’s disclosure, described the pilot as “an alarming change” that could redefine the relationship between police and the public. She raised concerns that officers would gain the ability to scan people on demand, potentially bypassing consent and eroding trust in policing institutions. Khan countered that the technology would only be employed during lawful stops when officers have doubts about a person’s identification, framing it as a practical alternative to detaining individuals unnecessarily at police stations.
“The only alternative the police have is to arrest that person and take them to the police station,” Khan explained, emphasising that the handheld devices aim to minimise inconvenience while ensuring accurate identity verification. He stressed that biometric data would only be retained temporarily and deleted immediately if no match is found, seeking to reassure the public about data protection concerns.
The pilot coincides with calls from the Equality and Human Rights Commission for independent oversight of facial recognition technologies in the UK. Policing minister Sarah Jones described the innovation as “the biggest breakthrough for catching criminals since DNA matching,” reflecting official optimism about its crime-fighting potential. Conversely, Mary-Ann Stephenson, chair of the equalities watchdog, highlighted risks of racial bias and false positives, warning that inaccuracies could cause human rights infringements and undue distress.
South Wales Police have already employed operator-initiated facial recognition using NEC’s NeoFace algorithm on smartphones, allowing officers to confirm identities of missing, wanted, or at-risk individuals when traditional verification methods are unavailable. The system may also assist in identifying deceased or unconscious persons, though covert use is prohibited. Civil liberties groups, including Big Brother Watch, have criticised the regulations as vague, raising fears of expansive use in situations unrelated to criminal activity.
“It’s shocking that I had to force the mayor to disclose that they are trialling operator-initiated facial recognition technology,” Garbett said. She argued that handheld devices introduce new privacy risks, as UK law generally does not require individuals to provide identification to police without strong justification. Unregulated scanning, she warned, threatens this fundamental legal safeguard.
The pilot follows prior statements from Khan, who indicated that the Metropolitan Police would consult stakeholders and consider ethical, legal, and policy implications before introducing operator-initiated facial recognition. In December, Sarah Jones launched a 10-week consultation emphasising the technology’s ability to remove dangerous criminals from the streets and strengthen community safety. Early trials, such as the Croydon live facial recognition project, reportedly led to the arrest of over one hundred wanted offenders within three months.
Lindsey Chiswick, the Met’s lead for facial recognition, described the operator-initiated devices as innovative tools designed to confirm identities quickly and accurately. She noted that the rollout would initially involve a small number of officers and highlighted the importance of immediate deletion of biometric data when no match is detected. Chiswick emphasised that the approach aims to reduce prolonged detentions while maintaining public safety.
Critics argue that even with safeguards, the expansion of handheld scanning represents a significant shift in police powers and public surveillance. There are concerns that false matches may disproportionately affect ethnic minorities, compounding existing tensions between communities and law enforcement. Legal experts note that current legislation does not explicitly address operator-initiated facial recognition, underscoring the need for robust statutory guidance to prevent misuse or overreach.
Supporters counter that the technology can streamline policing, allowing officers to verify identities without resorting to time-consuming detentions. By reducing administrative burdens and enhancing the speed of identification, proponents claim the devices can free officers to focus on broader crime prevention strategies. The government and police leadership frame the technology as complementary to existing investigative methods, integrating AI tools with traditional policing practices.
Public debate continues regarding the balance between effective law enforcement and civil liberties, with watchdog groups advocating for independent oversight to monitor accuracy, data retention, and proportionality of use. International experience shows that facial recognition technology often exhibits biases, reinforcing the argument for strict regulation to prevent unwarranted infringements on personal privacy. Observers warn that pilot programmes, if expanded prematurely, could normalise invasive surveillance practices without adequate safeguards.
As London’s Metropolitan Police prepare to trial handheld facial recognition, stakeholders from government, civil society, and technology sectors are closely monitoring outcomes. The pilot represents a critical test case for the broader implementation of AI in policing, highlighting both potential efficiencies and ethical challenges. Future evaluations will likely determine whether handheld scanning can coexist with fundamental rights while enhancing public safety, making transparent oversight essential. The introduction of this technology sparks urgent questions about accountability, fairness, and public trust in the capital’s law enforcement.