The Guardian - UK
Politics
DWP algorithm wrongly flags 200,000 people for possible fraud and error

Robert Booth, social affairs correspondent

Aerial view directly above rows of back-to-back terrace houses
Thousands of UK households every month have had their housing benefit claims unnecessarily investigated based on the faulty judgment of an algorithm. Photograph: Teamjackson/Getty Images

More than 200,000 people have wrongly faced investigation for housing benefit fraud and error after the performance of a government algorithm fell far short of expectations, the Guardian can reveal.

Two-thirds of claims flagged as potentially high risk by a Department for Work and Pensions (DWP) automated system over the last three years were in fact legitimate, official figures released under freedom of information laws show.

It means thousands of UK households every month have had their housing benefit claims unnecessarily investigated based on the faulty judgment of an algorithm that wrongly identified their claims as high risk.

It also means about £4.4m has been spent on officials carrying out checks that did not save any money.

The figures were first obtained by Big Brother Watch, a civil liberties and privacy campaign group, which said: “DWP’s overreliance on new technologies puts the rights of people who are often already disadvantaged, marginalised and vulnerable in the backseat.”

The DWP said it was unable to comment in the pre-election period. Labour, which could be in charge of the system in less than two weeks' time, has been approached for comment.

An Information Commissioner's Office inquiry into algorithms and similar systems used by a sample of 11 local authorities last year reported: "We have not found any evidence to suggest that claimants are subjected to any harms or financial detriment as a result of the use of algorithms or similar technologies in the welfare and social care sector."

But Turn2us, a charity that supports people who rely on benefits, said the figures showed it was time for the government to “work closely with actual users so that automation works for people rather than against them”.

To determine the risk that a claim could be wrong or fraudulent, the technology weighs claimants' personal characteristics, including age, gender, number of children and the kind of tenancy agreement they have.

Once the automated system flags a housing benefit claim as potentially fraudulent or erroneous, council staff are tasked with reviewing and validating whether claim details are correct, which involves seeking evidence from claimants over the phone or digitally. They must identify changes of circumstances and potentially recalculate claimants’ housing benefit awards.
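
The DWP has not published the model's internals, but the process described above (a fixed set of weighted claimant characteristics producing a risk score, with high scorers routed to council staff for review) matches a simple rules-based scorer. The sketch below is purely illustrative: every field name, weight and threshold is an assumption, not the department's actual system.

```python
# Illustrative sketch only: a rules-based risk scorer of the kind the
# article describes. The DWP has not published its actual model, so all
# field names, weights and the threshold here are invented. The article
# says age, gender, number of children and tenancy type are weighed.

def risk_score(claim: dict) -> float:
    """Combine a handful of claimant characteristics into one score."""
    score = 0.0
    score += 1.5 if claim["age"] < 25 else 0.0       # hypothetical weight
    score += 0.8 * claim["number_of_children"]       # hypothetical weight
    score += 1.2 if claim["tenancy_type"] == "private" else 0.0
    return score

def flag_for_review(claims: list[dict], threshold: float = 2.0) -> list[dict]:
    """Claims scoring at or above the threshold would be passed to
    council staff for a full case review."""
    return [c for c in claims if risk_score(c) >= threshold]
```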

The DWP decided to deploy the automated tool, which does not use artificial intelligence or machine learning, after a pilot showed that 64% of cases flagged as high risk by the DWP model were indeed receiving the wrong benefit entitlement.

But the outcomes of the case reviews that claimants subsequently faced revealed far less fraud and error. Only 37% of flagged cases were wrong in 2020-21, 34% in 2021-22 and 37% in 2022-23, little more than half the hit rate the pilot had predicted.

Nevertheless, the system did save the taxpayer money: every pound spent undertaking full case reviews of suspect claims returned £2.71 in savings, according to figures for 2021-22 released by the DWP.
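
Those figures support a rough back-of-envelope check. Assuming the £4.4m of wasted spend corresponds to the roughly two-thirds of flagged claims that proved legitimate, and treating the 2021-22 return ratio as representative (both assumptions; the DWP has not broken the numbers down this way), the implied totals work out as follows:

```python
# Back-of-envelope arithmetic from the figures in the article.
# Assumption: the £4.4m of wasted spend maps onto the ~2/3 of
# flagged claims that turned out to be legitimate.

wasted_spend = 4_400_000      # £ spent on reviews that saved nothing
false_positive_rate = 2 / 3   # share of flagged claims that were fine

implied_total_spend = wasted_spend / false_positive_rate
print(f"Implied total review spend: £{implied_total_spend:,.0f}")    # ~£6,600,000

# The DWP's 2021-22 figure: each £1 of review spend returned £2.71 in savings.
print(f"Implied gross savings: £{implied_total_spend * 2.71:,.0f}")  # ~£17,900,000
```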

Last year the DWP widened its deployment of artificial intelligence to uncover fraud and error in the universal credit system, where fraud and error cost £6.5bn in the last financial year, despite warnings of algorithmic bias against groups of vulnerable claimants. It has been criticised for a lack of transparency about how it uses machine learning tools. In January it emerged the DWP had stopped routinely suspending benefit claims flagged by its AI-powered fraud detector, in response to feedback from claimants and elected representatives.

Susannah Copson, a legal and policy officer at Big Brother Watch, said: “This is yet another example of DWP focusing on the prospect of algorithm-led fraud detection that seriously underperforms in practice. In reality, DWP’s overreliance on new technologies puts the rights of people who are often already disadvantaged, marginalised and vulnerable in the backseat.”

She warned of “a real danger that DWP repeats this pattern of bold claims and poor performance with future data-grabbing tools”.

“It was only recently that the government tried – and failed – to push through intrusive measures to force banks to conduct mass algorithmic monitoring of all customer accounts under the premise of tackling social security fraud and error. Although the powers failed to make it through legislative wash-up, concerns for DWP’s relentless pursuit of privacy-invading tech remain.”

• This article was amended on 24 June 2024 to remove a reference to a digital system provided by D4S DigiStaff. The system is not used to identify potential fraud and error cases for review as we suggested; rather it is used to process such cases after identification.
