Published: 26 February 2026. The English Chronicle Desk. The English Chronicle Online.
A facial recognition error has left Alvi Choudhury, a Southampton-based engineer, questioning the safety of AI systems in law enforcement. Choudhury, 26, was arrested in January at his family home after police linked him to a £3,000 burglary in Milton Keynes, more than 100 miles from Southampton. The arrest followed a match produced by automated facial recognition technology deployed across several UK police forces, and has renewed debate about the reliability of such software, the risk of algorithmic bias, and the consequences of misidentification.
Choudhury described the arrest as bewildering, noting that the suspect captured on CCTV appeared significantly younger and had different facial features. He explained that the footage showed a man with lighter skin, no facial hair, a larger nose, and smaller lips. The only apparent similarity was curly hair, leading him to question why officers believed he matched the suspect. “Everything was different. The suspect looked 18 years old,” Choudhury said, emphasising that his own beard and facial structure clearly differed from the footage. He believes officers may have focused on his ethnicity and hair texture when making the decision.
The technology behind the arrest was procured by the Home Office from Cognitec, a German company, and is used to compare faces against the UK-wide police national database. The system reportedly conducts around 25,000 searches each month across 19 million mugshots. According to the National Police Chiefs’ Council, matches generated by this system should be treated as intelligence rather than conclusive evidence, and human verification is intended to guide further investigation. Thames Valley Police stated that a human visual assessment preceded Choudhury’s arrest, although critics argue this failed to prevent a clear misidentification.
Home Office-commissioned research released in December revealed that facial recognition software produces higher false positive rates for black and Asian faces. At certain settings, the study found false positive rates of 4.0 percent for Asian faces and 5.5 percent for black faces, compared with just 0.04 percent for white faces. Police and crime commissioners described these results as evidence of “concerning in-built bias,” cautioning that wrongful arrests may be avoided more by chance than through effective checks. This case exemplifies the tangible impact of such biases on individuals’ lives and livelihoods.
Thames Valley Police have deployed live facial recognition in public areas, including Oxford, Slough, Reading, Wycombe, and Milton Keynes, scanning approximately 100,000 faces to date. Six arrests have been made using the technology, but incidents like Choudhury’s raise serious questions about oversight and accountability. Despite providing evidence of his work commitments in Southampton on the day of the Milton Keynes burglary, he remained in custody for nearly ten hours before being released at 2am. The arrest has caused Choudhury significant personal distress, interrupted his work, and generated anxiety within his family.
Choudhury is pursuing legal action against Thames Valley Police and Hampshire Constabulary, who executed the arrest. He seeks both damages and greater transparency regarding wrongful arrests resulting from facial recognition technology. His previous encounter with the police system in 2021, when he was wrongly arrested after being attacked in Portsmouth, had already placed his mugshot in police records. Choudhury now fears that repeated exposure to automated systems could trigger future misidentifications. He raised concerns about how this impacts professional opportunities, particularly security clearance for government clients, as arrest records can unfairly influence assessments of trustworthiness.
The responses from Thames Valley Police have been contradictory. While the force acknowledged that bias in facial recognition technology “may” have contributed to the arrest, an officer reportedly stated that the incident did not require wider organisational learning because the technology is already under strategic review. Police spokespeople denied the arrest was unlawful, asserting that the decision relied on human visual assessment following a retrospective match, and was not influenced by racial profiling. Choudhury, however, recounted that officers laughed when he questioned the similarity between himself and the CCTV suspect, revealing a lack of seriousness in addressing the error.
Warnings about automated facial recognition software have been mounting for years. In December 2024, William Webster, the UK biometrics and surveillance camera commissioner, expressed concern over the continued retention and use of images for individuals never charged following arrest. The month prior, South Wales Police settled a claim for damages with a black man wrongfully detained for 13 hours due to a facial recognition match. Legal experts argue that AI cannot replace human judgment in investigations, emphasising the importance of due diligence to prevent injustices.
Choudhury’s lawyer, Iain Gould of DPP Law, argued that artificial intelligence must complement careful human assessment, not substitute for it. The case underscores the risks of automated policing tools, particularly when errors disproportionately affect ethnic minority communities. Advocates call for stricter regulation, improved transparency, and independent verification to prevent systemic biases from undermining public confidence. The Home Office has stated that guidance and training to minimise errors are under review, alongside development of a national facial matching system featuring an independently tested algorithm.
The incident has sparked broader debate about ethical use of surveillance technologies, particularly regarding civil liberties, racial bias, and accountability in policing. Critics argue that repeated wrongful arrests, even if rare, can erode trust in law enforcement and jeopardise individuals’ careers and reputations. While supporters of AI integration in policing emphasise efficiency and crime prevention, cases like Choudhury’s demonstrate that technology alone cannot ensure justice, highlighting the need for procedural safeguards and transparent reporting mechanisms.
Choudhury’s experience illustrates the human cost of algorithmic errors and highlights the need for continuous oversight of AI tools in law enforcement. Experts stress that technological innovation must be paired with robust governance frameworks to prevent discrimination and wrongful detentions. The combination of human error and biased algorithms, they argue, can have serious consequences for ordinary citizens, whose lives may be upended by mistaken identification. This incident also serves as a warning to policymakers and technology developers about the ethical implications of deploying automated facial recognition without adequate safeguards.
As the UK continues to expand the use of AI in policing, cases such as Choudhury’s show the importance of transparency, accountability, and independent review. Legal, technical, and ethical concerns must be addressed to restore public confidence and prevent further incidents of wrongful arrest. Civil rights advocates emphasise that while AI can support investigations, reliance on flawed or biased systems risks undermining both justice and public trust. Choudhury’s ongoing legal case may help shape policy decisions, prompting improved oversight and safeguards for automated identification technologies in law enforcement across the country.
The debate over facial recognition technology is not only about technical performance but also societal impact. Choudhury’s ordeal demonstrates that even sophisticated systems cannot fully replace careful human assessment and accountability. The repercussions of wrongful arrest extend far beyond immediate distress, affecting employment, mental health, and perceptions of fairness in the justice system. Advocates continue to call for robust frameworks, independent testing, and transparent reporting to ensure technology supports, rather than undermines, the rights of individuals.