A police association representing thousands of officers has said it is considering bringing legal action against the Metropolitan Police over its use of an AI tool built by Palantir.
Over the weekend, the Met said it has launched hundreds of investigations into suspected rogue officers flagged by the tool. The force said three people have been arrested on suspicion of criminal offences including abuse of authority for sexual purposes, fraud, sexual assault, rape, and misconduct in public office.
Meanwhile, hundreds of others are being investigated or issued with prevention notices for offences including abusing the rota system for financial or personal gain and breaching the force’s hybrid working policy, it continued.
Responding on Monday, the Metropolitan Police Federation (MPF) called the use of the AI technology an “outrageous and unforgivable invasion of privacy” and said it is weighing legal action against the force.
The association, which represents more than 30,000 of the Met’s frontline officers in London, warned officers not to use work devices when off duty and said it had not been informed that the AI would be used “to analyse the movements of cops in the capital”.
It comes after the Met said it would use Palantir’s technology to detect potential concerns in professional standards following a string of high-profile cases involving its officers, including Wayne Couzens, a serving Met Police officer who was jailed for life in September 2021 after he kidnapped, raped, and murdered Sarah Everard.
The force said the technology allows it to bring together data it already lawfully holds in one place to “spot patterns of behaviour and act on them”. It added the use of AI did not stop “further fact finding” in any cases.
But the MPF warned the use of AI would “seriously damage” the trust officers have in the force.
General secretary Matt Cane said police officers have a right to a private life and questioned the “checks and balances” put in place by the force.

“This use of AI will seriously damage the trust Metropolitan Police officers have in the force and ride a coach and horses through already plummeting morale,” he said.
“No one wants bad police officers in policing,” he said. “But this use of AI to spy on our officers is not proportionate, just or proper. It’s an outrageous and unforgivable invasion of privacy.”
He added the federation was aware of plans by the force to upgrade its existing lawful business monitoring software, but said it was “never informed that the upgrade would include the deployment of Palantir’s artificial intelligence”.
“This continuous 24/7 geo-location tracking is highly intrusive and risks monitoring officers when they are off duty, on rest days, or at home. This presumption of wrongdoing and attack on officers’ personal lives is unacceptable.
“Many officers remain unaware of the full extent of this monitoring and how the AI system analyses their location data. There is a clear risk that this information could be misused to question overtime claims, sickness absence, performance, or conduct without proper fact-finding or context.
“Overall, the draconian approach raises significant legal and privacy concerns regarding proportionality, GDPR compliance, and the right to private life under Article 8 of the Human Rights Act.”
Mr Cane added: “The Federation now advises all members to be extremely cautious about carrying Metropolitan Police issued devices when off duty.
“The Federation is taking urgent legal advice on these matters and will issue further guidance to members in due course if required.”
Surveillance giant Palantir is best known for supplying analytics to the US Army, the New York Police Department and ICE. The company was co-founded by billionaire tech entrepreneur Peter Thiel, who was an early backer of US president Donald Trump.
A Met Police spokesperson said: “We have been open about the deep-rooted cultural issues facing the Met and the need to tackle them while supporting the majority of our workforce who act with professionalism and integrity every day.
“The new technology, alongside Lawful Business Monitoring, brings together data we already lawfully hold to spot patterns of behaviour and act upon them.
“The technology represents a significant step forward, enabling a stronger public health style approach focused on early identification, prevention and proportionate intervention.
“Automation of data, including the limited use of AI, supports analysis and information handling – it does not replace further fact-finding and professional judgement by officers in the Professionalism Directorate who deal with individual cases.”
Palantir has been contacted for comment.