The Guardian - UK
Robert Booth, UK technology editor

Revealed: bias found in AI system used to detect UK benefits fraud

Photograph caption: Claims for advances on universal credit payments are being examined by a biased AI system designed to detect fraud, it has emerged. Photograph: Mina Kim/Reuters

An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal.

An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.

The admission was made in documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). The “statistically significant outcome disparity” emerged in a “fairness analysis” of the automated system for universal credit advances carried out in February this year.

The emergence of the bias comes after the DWP this summer claimed the AI system “does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers”.

This assurance came in part because the final decision on whether a person gets a welfare payment is still made by a human, and officials believe the continued use of the system – which is intended to help cut an estimated £8bn a year lost to fraud and error – is “reasonable and proportionate”.

But no fairness analysis has yet been undertaken in respect of potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status, the disclosures reveal.

Campaigners responded by accusing the government of a “hurt first, fix later” policy and called on ministers to be more open about which groups were likely to be wrongly suspected by the algorithm of trying to cheat the system.

“It is clear that in a vast majority of cases the DWP did not assess whether their automated processes risked unfairly targeting marginalised groups,” said Caroline Selman, senior research fellow at the Public Law Project, which first obtained the analysis.

“DWP must put an end to this ‘hurt first, fix later’ approach and stop rolling out tools when it is not able to properly understand the risk of harm they represent.”

The acknowledgement of disparities in how the automated system assesses fraud risks is also likely to increase scrutiny of the rapidly expanding government use of AI systems and fuel calls for greater transparency.

By one independent count, there are at least 55 automated tools being used by public authorities in the UK potentially affecting decisions about millions of people, although the government’s own register includes only nine.

Last month, the Guardian revealed that not a single Whitehall department had registered the use of AI systems since the government said it would become mandatory earlier this year.

Records show public bodies have awarded dozens of contracts for AI and algorithmic services. A contract for facial recognition software, worth up to £20m, was put up for grabs last month by a police procurement body set up by the Home Office, reigniting concerns about “mass biometric surveillance”.

Peter Kyle, the secretary of state for science and technology, has previously told the Guardian that the public sector “hasn’t taken seriously enough the need to be transparent in the way that the government uses algorithms”.

Government departments, including the Home Office and the DWP, have in recent years been reluctant to disclose more about their use of AI, citing concerns that doing so could allow bad actors to manipulate the systems.

It is not clear which age groups are more likely to be wrongly targeted for fraud checks by the algorithm, as the DWP redacted that part of the fairness analysis.

Neither did it reveal whether disabled people are more or less likely to be wrongly singled out for investigation by the algorithm than non-disabled people, or the difference between the way the algorithm treats different nationalities. Officials said this was to prevent fraudsters gaming the system.

A DWP spokesperson said: “Our AI tool does not replace human judgment, and a caseworker will always look at all available information to make a decision. We are taking bold and decisive action to tackle benefit fraud – our fraud and error bill will enable more efficient and effective investigations to identify criminals exploiting the benefits system faster.”
