The government has widened its deployment of artificial intelligence to uncover welfare fraud, despite warnings of algorithmic bias against groups of vulnerable claimants.
In a £70m investment in “advanced analytics” applied to universal credit (UC) claims, the Department for Work and Pensions (DWP) has extended its use of machine learning as it attempts to save more than £1bn of the £8bn-plus lost to fraud and error last year, audit documents scrutinised by the Guardian reveal.
The project does not appear to have been formally announced by the government, which has been accused of being secretive about AI in the welfare system. If extended, it has the potential to reach many of the 5.9 million people who claim UC.
After a trial last year using automated software to flag up potential fraudsters seeking UC cash advances, similar technology has now been developed and piloted to scan welfare applications made by people living together, self-employed people and those seeking housing support, as well as to assess the claims people make about their personal capital, such as savings.
Welfare rights organisations and UN experts have previously said that extending the UK’s “digital by default” welfare system with machine learning without greater transparency risks creating serious problems for some benefit claimants.
Big Brother Watch, a UK civil liberties campaign group, said it was “worrying” the government was “pushing ahead with yet more opaque AI models when it admits its capability to monitor for unfairness is limited”.
“The DWP consistently refuses to publish information about how they operate bar the vaguest details,” said Jake Hurfurt, the group’s head of research and investigations.
The extension of welfare automation is summarised in the comptroller and auditor general’s statement in the DWP’s latest annual report. It says the DWP has “tight governance and control over its use of machine learning”, but reveals that officials have detected “evidence of bias toward older claimants”.
An algorithm trialled last year was fed data about previous claimant fraud to teach it to predict which new benefit claimants might be cheating. Final decisions are made by a human.
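The DWP has not disclosed how its model works, but the description above matches a standard supervised-learning setup: a classifier is trained on historical claims labelled as fraudulent or legitimate, then scores new claims so that only the highest-risk cases are passed to a human investigator. The following is a minimal sketch of that general approach, not the department’s actual system; the data, features and review threshold are all invented.

```python
# Illustrative sketch only: the DWP has not published its model.
# Data, features and the review threshold below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic historical claims: each row is one claim, and the label
# records whether it was later confirmed as fraud (1) or not (0).
X = rng.normal(size=(5000, 4))             # stand-in claim features
y = (rng.random(5000) < 0.05).astype(int)  # ~5% historical fraud rate

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a classifier on past outcomes to predict fraud risk.
model = LogisticRegression().fit(X_train, y_train)

# Score incoming claims and flag only the highest-risk ones;
# in the DWP's account, a human makes the final decision on each.
risk = model.predict_proba(X_test)[:, 1]
flagged_for_review = np.argsort(risk)[-50:]  # top 50 claims by risk score
```

In a pipeline like this, the model never denies a payment itself; it only prioritises cases, which is consistent with the DWP’s statement that final decisions rest with a human.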
The auditor general warned of “an inherent risk that the algorithms are biased towards selecting claims for review from certain vulnerable people or groups with protected characteristics”.
The DWP has admitted that its “ability to test for unfair impacts across protected characteristics is currently limited”, the auditor noted.
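The kind of check the auditor is describing can be made concrete. One common first test is to compare the model’s flag (selection) rates across groups defined by a protected characteristic, such as age bands, and look at the ratio between the lowest and highest rates; the “four-fifths rule” used in US employment testing treats a ratio below 0.8 as a warning sign. The sketch below is hypothetical throughout: the groups, flag decisions and rates are invented to illustrate the arithmetic, not drawn from DWP data.

```python
# Hypothetical fairness check: groups and flag decisions are invented.
import numpy as np

rng = np.random.default_rng(1)

# Simulated claimants with an age band and a model flag decision,
# deliberately skewed so older claimants are flagged more often.
age_band = rng.choice(["under_35", "35_to_54", "55_plus"], size=10_000)
flagged = rng.random(10_000) < np.where(age_band == "55_plus", 0.08, 0.04)

# Flag rate per group, and the disparity ratio between the extremes.
rates = {g: flagged[age_band == g].mean() for g in np.unique(age_band)}
ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: flagged {rate:.1%} of claims")
print(f"selection-rate ratio (min/max): {ratio:.2f}")  # below 0.8 is a red flag
```

A disparity in flag rates is only a starting point; a fuller analysis would also compare error rates across groups, which is harder when, as the DWP concedes, its ability to test across protected characteristics is limited.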
The government is under pressure to increase public confidence in its use of AI, and the auditor general urged it to publish its assessment of bias in machine learning models.
The DWP has so far declined to release information about how its machine learning algorithm works. It has blocked freedom of information requests from the Guardian seeking the name of any companies involved in the fraud detection trial relating to universal credit advances because “details of contracts are commercially sensitive”. It is also refusing to publish any results from last year’s trial, including any assessment of bias against particular groups of claimants.
Last month, Tom Pursglove, minister for disabled people, health and work, told parliament the government would not publish an equalities assessment of trials of machine-learning algorithms, because it would tip off fraudsters “leading to new frauds and greater losses to the public purse”.
The DWP has told the auditor it is “working to develop its capability to perform a more comprehensive fairness analysis across a wider range of protected characteristics”.
A spokesperson for the DWP said: “We continue to explore the potential of new technologies in combating fraud and deploy comprehensive safeguards when doing so. AI does not replace human judgment when investigating fraud and error to either determine or deny a payment to a claimant.”