DWP wrongly forces claimants through investigation

  • Post last modified: June 24, 2024
  • Reading time: 9 mins read


The Canary has previously reported how the Department for Work and Pensions' (DWP) reliance on AI and algorithmic technology for benefit fraud detection could put disabled and chronically ill claimants at risk. Now, new data obtained by a campaign group has shown how the DWP is already discriminating against claimants – specifically, housing benefit ones.

It revolves around a secretive algorithm that the department uses to determine which claims warrant investigation for potential fraud. As it turned out, the majority of flagged cases were legitimate. As a result, the DWP wrongly forced over 200,000 housing benefit claimants through unjustified fraud investigations.

Benefit fraud: flawed algorithm wrongly flags claimants

As the Guardian reported:

More than 200,000 people have wrongly faced investigation for housing benefit fraud and error after the performance of a government algorithm fell far short of expectations, the Guardian can reveal.

Two-thirds of claims flagged as potentially high risk by a Department for Work and Pensions (DWP) automated system over the last three years were in fact legitimate, official figures released under freedom of information laws show.

It means thousands of UK households every month have had their housing benefit claims unnecessarily investigated based on the faulty judgment of an algorithm that wrongly identified their claims as high risk.

It also means about £4.4m has been spent on officials carrying out checks that did not save any money.

UK civil liberties campaign group Big Brother Watch acquired the data via a series of Freedom of Information (FOI) requests.

The group has previously highlighted the DWP’s alarming use of automated algorithms. For instance, the department utilises a ‘Risk Based Verification’ (RBV) system to decide the level of verification the DWP requires for each claim.

Moreover, in 2021, Big Brother Watch revealed how councils have used an automated tool to assign “risk scores” to housing benefit claimants. This is supposed to determine the likelihood of claimants committing benefit fraud. Notably, its investigation found that:

540,000 benefits applicants are secretly assigned fraud risk scores by councils’ algorithms before they can access housing benefit or council tax support

The group expressed how this “mass profiling” and “citizen scoring” process is:

secretive, unevidenced, incredibly invasive and likely discriminatory

Similarly, the DWP’s own algorithm technology:

weighs claimants’ personal characteristics including age, gender, number of children and the kind of tenancy agreement they have.

Predictably, Big Brother Watch found that this mass profiling tool had flagged many claimants wrongly.

As the Guardian reported, the tool was only around half as effective as the DWP had estimated. Specifically, cases of fraud and error made up just a third of those the algorithm singled out for investigation between 2020 and 2023. This compares with the 64% fraud and error rate the department's pilot of the algorithm had identified.
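The scale of the failure follows from simple arithmetic. A minimal sketch of the sums, using only the approximate figures reported above (these are back-of-envelope estimates from the article, not official DWP data):

```python
# Illustrative arithmetic based on the figures reported above.
# These are approximations from the article, not official DWP data.

pilot_hit_rate = 0.64    # share of flagged claims the pilot predicted would be fraud/error
actual_hit_rate = 1 / 3  # share that actually turned out to be fraud/error

# The tool performed roughly half as well as the pilot predicted:
relative_performance = actual_hit_rate / pilot_hit_rate
print(f"Relative performance: {relative_performance:.0%}")

# If over 200,000 legitimate claims were wrongly flagged, and legitimate
# claims made up two-thirds of all flags, the total flagged was roughly:
wrongly_flagged = 200_000
total_flagged = wrongly_flagged / (1 - actual_hit_rate)
print(f"Total flagged claims: ~{total_flagged:,.0f}")
```

On those numbers the algorithm achieved roughly 52% of its piloted accuracy, and around 300,000 claims in total would have been flagged over the three-year period.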

Benefit fraud is a favourite Tory fairytale

In April, the DWP boasted about a big benefit fraud-busting breakthrough when three actual benefit fraudsters were sentenced. However, as the Canary pointed out, this is the exception, not the norm.

So, after celebrating this, it turns out the department has been wrongly investigating hundreds of thousands of claimants for benefit fraud.

Of course, this is not surprising, because the DWP has been overzealously inflating its benefit fraud statistics anyway. For one, as the Canary's Steve Topple has underscored, the department lumps DWP and claimant error in with its fraud data. On top of this, he detailed how much of its so-called fraud is not evidence-based. As such, Topple concluded that:

much of the £8.3bn the DWP promotes as fraud (and that the media dutifully laps up) is just based on assumptions and guesswork.

In other words, most of the department's so-called fraud estimations are, in fact, bogus. Despite this, the Tory government has repeatedly used its fraud statistics to pursue its anti-welfare agenda.

Now, the results of this are coming home to roost – and predictably, it's marginalised claimants who have once again borne the brunt of the department's failures.

Algorithms already discriminating against claimants

Moreover, this isn't the first time a DWP benefit fraud algorithm has been found wanting. Notably, previous reports have highlighted that the DWP may be discriminating against marginalised groups with the algorithms it uses to select claimants for fraud investigations.

First, in 2021, the Greater Manchester Coalition of Disabled People (GMCDP) launched legal action against the DWP for another such instance of potential bias. As the Disability News Service (DNS) reported, the campaigners raised concerns that a secret algorithm could be:

over-picking disabled people for investigations of fraud

Then, in 2023, the public spending watchdog the National Audit Office (NAO) identified that there was an:

inherent risk that the algorithms are biased towards selecting claims for review from certain vulnerable people or groups with protected characteristics.

Or, as Victoria Derbyshire summarised, the NAO’s report suggested that the DWP’s algorithms could be “sexist, ageist, or racist”.

Aside from benefit fraud detection, DWP algorithms have also failed elsewhere. For instance, in 2020, a Human Rights Watch (HRW) report highlighted how the algorithm the department used to calculate benefits was also "flawed". Significantly, its means-tested benefit system was "overestimating" earnings, and impacting those with irregular or low-paid jobs. As a result, it was pushing these people into deeper poverty.

Despite these repeated issues, the Tories had lined up plans to expand the use of algorithms, alongside AI technology, for benefit fraud detection. The government was aiming to ramp this up with its Data Protection and Digital Information Bill. Through this, the Tories intended to implement mass algorithmic surveillance of claimants' bank accounts. However, due to a technicality around the general election, the bill was thrown out in May.

A scandal with “shades of Horizon”

In January, the House of Commons work and pensions committee grilled DWP senior civil servant Peter Schofield on the department’s annual report.

As the DNS reported, at one point, the committee homed in on the DWP's AI algorithm technology. Conservative MP Desmond Swayne quizzed Schofield on whether there were "shades of Horizon" about the DWP's use of this.

Schofield replied:

I really hope not.

However, clearly this is already unfolding. Big Brother Watch's latest findings, alongside the DWP's previous admissions of discriminatory algorithms, show that the DWP is once again screwing over more marginalised people in the name of its benefit fraud "crackdown" crusade. And of course, this is just the tip of the iceberg.

This year alone:

  • The UN Committee on the Rights of Persons with Disabilities (UNCRPD) found the DWP had committed “grave and systematic” violations of disabled people’s human rights.
  • The Equality and Human Rights Commission (EHRC) has launched an inquiry into benefit deaths. Previously, the Canary’s Steve Topple has shown that the DWP’s callous policies have caused the deaths of tens of thousands of people since 2011. However, as we’ve also highlighted, this inquiry is likely to fall short of holding key DWP bosses and the department at large to account.
  • The Canary has also identified that the DWP has stripped over 180,000 people of their benefits.

Now, putting over 200,000 housing benefit claimants through unwarranted benefit fraud investigations adds to this growing list of harms. In other words, the DWP’s ‘Horizon scandal’ is already here – and it has been brewing for a long time.

Feature image via UK Care Guide – YouTube
