Published: 23 February 2026 | The English Chronicle Desk | The English Chronicle Online
The Metropolitan Police has begun using artificial intelligence (AI) tools supplied by Palantir, a controversial US tech firm, to analyse internal data and flag patterns that could indicate potential officer misconduct or declining professional standards, according to reports. The new system — part of a pilot programme — is designed to help the force sift through data on sickness, absences and overtime across its workforce of around 46,000 officers and staff to detect unusual trends that may warrant further scrutiny.
Palantir’s software will not make disciplinary decisions on its own, and human officers remain responsible for interpreting any flags raised by the system, police officials have said. However, the move has sparked opposition from the Police Federation, which represents rank‑and‑file officers and describes the initiative as generating “automated suspicion” that could wrongly label routine issues — such as workload or illness — as indicators of wrongdoing. Critics argue that the opaque nature of the AI’s analysis risks misinterpretation and unfairly targets officers without clear transparency on how the algorithms function.
Palantir, co‑founded by tech investor Peter Thiel, is a divisive presence in tech and politics, known for supplying data‑analysis tools to US border enforcement agencies and foreign military forces. Its contracts with UK public sector bodies — including the NHS and Ministry of Defence — have previously drawn scrutiny from privacy campaigners and some members of Parliament, who have raised concerns about data use and civil liberties.
Proponents of the Met’s approach say that AI can help identify subtle patterns across large datasets that might otherwise be overlooked, potentially strengthening institutional integrity and early intervention on professional issues. Palantir and policing supporters also argue that such tools can improve efficiency and free up human resources. Yet the debate reflects wider concerns about the increasing use of AI in public services, where accountability, transparency and potential bias remain persistent issues.




























































































