This seems like a good time to take stock of the evidence on perhaps the biggest issue in the benefits system over the past few years: benefits sanctioning. The massive ESRC-funded ‘welfare conditionality’ project has this week published its final findings, prompting headlines that “Benefit sanctions found to be ineffective and damaging” and “Benefit sanctions increasing poverty and pushing people into ‘survival crime,’ finds report”. The Parliamentary Select Committee is doing an inquiry, as is the Government’s former sanctioning reviewer Matt Oakley for one of the think-tanks. And I myself have written about it in a paper last year and a Demos report earlier this year.
But often missed in the hubbub are the subtleties of the evidence, a deeper effort to get to the bottom of what we know. Over a few blog posts, I want to interrogate some of the existing evidence, to challenge some of the easy interpretations that get mobilised for political debate. And this week it’s the turn of the Government’s 2016 pilots of conditionality for ESA claimants, which were quietly published last August – and unlike this week’s headlines, the pilots seemed to show that conditionality was effective.
A pilot of conditionality via ‘More Intensive Support’
The policy in question is something called ‘More Intensive Support’ (MIS). This was a mandatory series of extra Jobcentre interviews for a particular set of disability benefit claimants (ESA WRAG claimants returning from the Work Programme), which was claimed to test the effects of conditionality (more of which later). And unusually, the DWP evaluated the impacts of MIS using a randomised controlled trial – one of very few such studies anywhere in the world on the effects of conditionality on sick/disabled claimants.
The quantitative evaluation of the RCT was published by Government in August 2017 within a ‘synthesis report’ (this covers both MIS and two slightly different sorts of pilots that I won’t talk about; there are also two more detailed reports on aspects of the qualitative evaluations. There’s also an excellent summary of the reports from a mental health angle by Ayaz Manji at Mind).
On the surface, we seem to have an intervention based on conditionality for disabled people, evaluated using the strongest possible design. And the headline result was that MIS increased employment rates over the following year, as the chart below shows:
Not so fast: a considered take on the impact of MIS
However, the results of MIS are actually more nuanced for a bundle of different reasons:
- We can’t be very sure whether MIS really did have a positive effect on employment or not – there is a wide confidence interval around the effects, to the extent that we would usually say we’re not confident in the results (or, in cruder terms that I don’t like: that the effect is often not statistically significant at conventional levels).
- The effect isn’t very big – it amounts to an extra 1% of those affected by the intervention getting into work by the end of a year. The overwhelming majority (>95%) of these claimants were not working at the end of a year, which as the qualitative research makes clear, is because they had really severe barriers to work.
- MIS seemed to do as well (or better) at pushing people off benefits per se as it did at getting them into work. You can see this in a couple of ways: it reduced the numbers of people on benefit from about week 13 but only consistently increased the numbers in work from about week 30, and it reduced the number of claimants by about 2%, roughly double the effect on employment.
- MIS seemed to be bad news for people with mental health problems, as far as we can tell (although the sample size is getting small at this point). The employment effect on those without mental health problems was +6.7% (with a 95% confidence interval of -0.7% to +14.0%), but the effect on those WITH a mental health problem was -1.1% (-5.5% to +3.3%). However, the effect on benefit claims seemed to be the same, suggesting that MIS was pushing people with mental health problems off benefits without getting them into work. This isn’t definitive – as you can see, the confidence intervals are wide – but this is hardly a ringing endorsement of MIS for this group.
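For readers less familiar with confidence intervals, the logic above can be sketched in a few lines of code. This is purely illustrative (it is not the DWP’s analysis code, and the helper function name is my own): a 95% confidence interval that straddles zero means we can’t rule out a zero effect, i.e. the estimate is not statistically significant at the conventional 5% level. The point estimates and intervals below are the ones reported in the synthesis report, in percentage points.

```python
def excludes_zero(lower, upper):
    """True if the whole interval lies on one side of zero,
    i.e. the effect is statistically significant at the 5% level."""
    return lower > 0 or upper < 0

# MIS employment effects (percentage points) with 95% CIs, as reported:
effects = {
    "no mental health problem": (6.7, -0.7, 14.0),
    "mental health problem":    (-1.1, -5.5, 3.3),
}

for group, (estimate, lo, hi) in effects.items():
    verdict = "significant" if excludes_zero(lo, hi) else "not significant"
    print(f"{group}: {estimate:+.1f}pp (95% CI {lo:+.1f} to {hi:+.1f}) -> {verdict}")
```

Both intervals include zero, which is why neither subgroup effect – the apparently positive one for claimants without mental health problems, or the apparently negative one for those with them – can be treated as firmly established.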
I should add that these figures are all set out clearly in the evaluation report that DWP produced. The summary of the results is also generally OK, but it does gloss over the difference between the employment effect and the benefits effect, which is potentially misleading.
What exactly does MIS show anyway?
There is however a deeper problem here, about exactly what MIS shows – and this is a problem that we can see in other UK & US evaluations of conditionality for disabled people (such as the UK Support for the Very Long-Term Unemployed Trailblazer; see my 2018 Demos report p72). The problem is that this is not actually a test of ‘sanctions’ at all – it is a test of giving people more support that they can be sanctioned for not taking up. Given that giving people more support is usually a good thing, it’s very hard to know what these types of studies actually show.
To make matters worse, the DWP research reports give us effectively zero information on how much sanctioning actually happened in MIS. Indeed, it appears that some Jobcentres tried to subvert the purpose of the trial. In principle MIS participants were meant to receive 270 minutes of work coach time per year (rather than 88 minutes), but in practice they received 115 minutes (and the control group 70 minutes). In explaining this, the report notes:
“administrative data suggests that some sites did not appear to distinguish between the intervention and the control participants, providing similar amounts of support to each group. This quantitative finding is corroborated by some Work Coach accounts that they did not feel comfortable treating individuals differently. Rather, they based their support on the individual claimant’s needs.”
Conversely, the qualitative research with claimants (p76) showed that claimants were often given the impression that voluntary interventions were mandatory anyway, something that I have heard anecdotal reports of elsewhere. That is: an intervention labelled as ‘nominal extra support with mandation’ can lead to little of either; whereas an intervention labelled as ‘voluntary’ can be felt to be mandatory.
In conclusion, on paper this pilot should have been a real contribution to the evidence base, helping us understand whether or not conditionality works for one group of sick/disabled benefit claimants. In practice, the impact assessments end up telling us surprisingly little (even if the accompanying qualitative reports are very revealing). In future weeks I will consider how much other forms of research on conditionality can tell us, and what our research priorities should be going forward.