Why evidence-based policy won’t tackle inequalities

(Image: a stamp of evidence)

There are some buzzwords that can win an argument all by themselves. You can’t describe yourself as against ‘fairness’ or ‘freedom’, for instance, or object to ‘social justice’ – however wrong-headedly they’re being used. And for policy-focused researchers, our clinching buzzword of choice is ‘evidence-based policy’.

So when I say I’m about to argue that evidence-based policy (EBP) is dangerous, I know I’m facing an uphill battle.  But over 2011 I want to explain the many failings of EBP – and perhaps even persuade you to be more sceptical about the unthinking use of ‘evidence’.

Work stress, capitalism and evidence

For this post, I just want to give you two quick examples of one problem with EBP, which frustrated me in the closing months of 2010.

At the end of October, I found myself in a historic building on Great Carlton Street, one of the more exclusive addresses on the edge of the royal parks in the heart of London. Tarani Chandola – a really excellent sociologist/epidemiologist  – was launching his British Academy report on work stress, to an audience of the great and the good (including Michael Marmot, and the head of the Trades Union Congress). Before describing the many harms caused by job strain, Tarani began with the following graph:
[Graph: the rise in job strain in Britain over the past two decades]

In other words, Britain has seen an astonishing rise in job strain over the past 20 years. (‘Strain’ here means a high-demands, low-control job.) You’d therefore expect the recommendations from the report to talk about tackling the likely causes of this – which for private sector jobs are structural changes in British capitalism, and for public sector jobs the doctrine of ‘New Public Management’. (More on both of these in future, as this is the topic of my PhD thesis…)

But such recommendations were nowhere to be seen.  Instead, there was an exclusive focus on interventions that target either individual people or individual workplaces, together with health & safety law.  When I asked Tarani why his policy recommendations didn’t match his most striking graph, he simply said, ‘there’s no evidence on what interventions would work at the societal level, so while I wanted to include it, I had to keep the conclusions evidence-based’.

Tarani Chandola is right, of course.  The fault isn’t with him (and I really recommend his report as an accessible and thorough overview of the work stress literature).  The fault is with what evidence-based policy has become.  Before bringing this point out, though, another example.

Evidence-based reductions in health inequalities

I see the same problem when we try and talk about ‘evidence-based interventions’ to tackle health inequalities. Late in 2010, we had a report on inequalities in life expectancy from probably the most powerful group of MPs outside Government – the Public Accounts Select Committee. Their report was highly critical of New Labour’s record, noting that “inequality in health has increased” and that targets had been consistently missed.

Health inequalities obviously depend on a variety of upstream social factors, which we can call ‘inequality-generating mechanisms’. Did the Committee’s report mention these at all? Well, no.  They actually focused on delays and implementation issues in the government’s health inequalities strategy, with their most useful suggestions concerning the commissioning strategies in primary care.

But while their suggestions are sensible, the idea that this would have been sufficient to meet the Government’s health inequalities targets is dubious. I’ve previously written on Inequalities about Labour’s unflattering record on inequality-generating mechanisms. Yet the Public Accounts Committee’s remit – evidence-based, value-for-money scrutiny – restricts it to evaluating policies in terms of micro-management, rather than whether the policy could ever actually succeed in its aims.

How to tackle inequality

In critiquing the idea of evidence-based policy, I’m obviously not calling for a return to ‘opinion-based policy’ – if I didn’t believe that social science had a valuable role in policymaking, I wouldn’t dedicate my life to it. Yet much as I love experiments, the most important determinants of inequalities are often impossible to evaluate using the logic of conventional evaluation research. The current model of EBP is therefore flawed: not just wrong, but actively damaging to attempts to reduce inequalities.

The question is: if we want to provide evidence with a wider scope, what do we replace EBP with? The recent Marmot Review on health inequalities in England reassuringly uses research on a broader, societal-level canvas – including on work stress. But the review also struggles with the role of evidence, glossing over major research gaps, recommending radical policies that go way beyond the science, and generally giving the appearance of a polemic rather than an evidence-gathering exercise. (An unfortunate impression, as there’s an enormous amount of great research in there.) This illustrates further problems with the current model of evidence & policy, which I’ll come back to on another occasion.

Like those other powerful, symbolically loaded terms, ‘evidence-based policy’ is too useful a term to be consigned to the dustbin. The point is rather to contest its meaning, and to turn it into something that has a realistic chance of reducing inequalities. Battles over terminology are not at the glamorous end of social science – but their importance in setting the direction of research and policy should not be underestimated.


12 responses to “Why evidence-based policy won’t tackle inequalities”

  1. Happy 2011, Ben. Thanks for the post. Since you’ll be returning to these issues, I have what may be a useful follow-up question. Your main critique seems to concede the (current) inability of social science to identify ways to address the real upstream causes of the outcomes that concern you. You then reveal dissatisfaction when social scientists do not recommend policies for which there is not hard and fast evidence of efficacy or even systemic relevance. Finally, you also criticize the Marmot review for “glossing over major research gaps, recommending radical policies that go way beyond the science, and generally giving the appearance of a polemic rather than an evidence-gathering exercise.” This, you say, “illustrates further problems with the current model of evidence & policy.”

    Since I was beginning to understand the “current model” as the model that says “tread very lightly, unless the evidence is ironclad,” your comments on Marmot suggest that the current model may not be as conservative as that. So can you say more specifically, either here or in another post, what this current model is? I’m really looking forward to following your ruminations on EBP this year, but I want to be sure I have the foundation I need from the start.

  2. Hi Paul – happy new year too!

    I wasn’t trying to put the Marmot Review within the current model of EBP (I should have made this more clear). The Marmot Review is a challenge to the EBP model, and for this it deserves enormous credit. Yet in rejecting EBP, it creates major problems of its own.

    In terms of your comment that “You then reveal dissatisfaction when social scientists do not recommend policies for which there is not hard and fast evidence of efficacy or even systemic relevance” – this is a fair comment from the piece, but it wasn’t quite what I was trying to say. I definitely don’t want to suggest that researchers should be able to recommend whatever they want in the absence of evidence, playing fast and loose with the truth. Indeed, that sort of behaviour will be the subject of a separate rant!

    My view of social science at the moment is that much of it is torn between a rigid conception of EBP, and the ‘critical social science’ that leaves the truth way behind. So there’s a need for a ‘middle way’ – a way that maintains the value and robustness of social science, without being systematically biased against macro-level change. This is a slightly ambitious project though, so critical feedback all the way through will be hugely appreciated…

  3. Hi Ben. Nice post. Prof. Chandola’s reply to your question reminds me of Nasrudin’s parable of the lost key. A drunk loses his key in the dark. He is found searching under a lamp post down the street. When asked why he is looking there, and not where he lost the key, he answers ‘but the light is better here’. In a unit we’re both familiar with, this seemed to be one of the principal modi operandi, as I describe in an article just published by the Journal of Social Policy: http://bit.ly/geNeGg.

    But if not evidence-based policy, then what?

    • Thanks for posting the link Alex, and the article looks fantastic – I’ll definitely be in touch with my thoughts when I’ve read through it, and I may well blog about it as well!

      I do have some thoughts on what could replace ‘evidence-based policy’, but the trouble is that we need to re-think everything to begin with. That is, we need to go back to first principles and think about (i) what is ‘truth’; (ii) why do we think social science has a special claim to truth; and (iii) how does this role fit with the critical dimension of social science. (I used to think that philosophy was irrelevant, up until the point that I couldn’t answer this question without it…).

      As I said in my reply to Paul’s comment, this is a wildly demanding way of answering the question, and it will take a large number of people to piece everything together. I’ll try and write my brief thoughts on this over the year, but these will only be brief. Hopefully in the next few years I can persuade someone to fund me to think about nothing else for six months 🙂

  4. Ben (if I may),

    I’m a bit confused by this post. Several years ago, a theme issue on evidence-based medicine was published in which a group of authors argued that EBM has more to fear from its friends than its enemies.

    What they contended — in my interpretation, of course — was certainly not that it is a wise idea for health care providers to proceed blithely without examination of the available evidence, but rather that a highly technocratic, somewhat doctrinaire version of EBM had given the general idea motivating the movement a terrible name. In other words, the significance of practicing medicine according to a necessarily human and hence limited, perspectival, and fallible interpretation of what the best evidence is and what the implications of that evidence are for the patient should not depend on a particular (sociological) discourse of EBM, which ironically was being pushed by many of the most prominent EBM proponents.

    I think precisely the same kind of reasoning is analogous to EBP. In my own work, which centers on population health, inequities, and the SDOH (using ethics, law/policy, and history rather than social science/epi), I argue that the normative commitment to translate the best social epidemiologic evidence regarding pop health and inequities into policy is compelling. Social justice demands nothing else (thinking here about Sen, Ruger, Pogge, etc.)

    Of course, that evidence, like most evidence for anything, is subject to vigorous debate, and is necessarily uncertain, limited, and less robust than we would like it to be. But isn’t that really a feature of the problem of induction? How could it be any other way? From an ethical perspective, that should not and cannot prevent us from doing the best we can with the best evidence we have — which includes the thorny epistemic problems of deciding what counts as the best evidence — regarding the fundamental causes of health and patterns of disease within human populations.

    Marmot’s work, among many others, of course, is critical in setting the parameters for this engagement, I think, and the reasons why the SDOH evidence base has been poorly translated into public health policy, especially here in the U.S., have everything to do with the bizarre (and I’ll go ahead and say it: pathological) political culture over here.

    Given that culture, I should say here I firmly believe that the kind of translation I am interested in and committed to may be quite impossible in the U.S. (I have been told by very senior federal policymakers what all of the empirical evidence confirms: public health policy in the U.S. is not made based on the best evidence, even in those rare cases where there is consensus on the quality of that evidence and its implications).

    But I can’t see how any of these concerns undermine a fundamental normative commitment to think about ways to translate evidence into policy. To abandon that quest, however limited and arduous it may be, is (for me) to abandon any hope of social justice and, more personally, would constitute an epic failure to answer Aristotle’s great question (what kind of a person do I want to be?).

    Thanks for the stimulating post, and the wonderful blog, BTW.

    • Thanks for such a detailed response Daniel. I agree with you wholeheartedly that research shouldn’t stay in the ivory tower – some sort of engagement is essential. I guess I just haven’t worked out what sort of engagement that should be.

      The crux of the matter is when you say ‘a fundamental normative commitment…to translate evidence into policy’, and then follow this up by talking about ‘social justice’. This is where I came into research as well – but where is the dividing line between providing evidence for policy, and using evidence as a cloak for my own political beliefs? Evidence can’t determine what course of action we should take without our taking a value position on it. Given that social science involves a public, critical role, what value positions should we take? I’m with you on wanting to promote social justice through my work, but I’m trying to figure out how to do this in a way that isn’t trying to present social justice as ‘science’.

      And then the other huge issue is around uncertainty – how certain do we have to be before we recommend something? How do we convey uncertainty to others? There is a value judgement in deciding when something is ‘certain enough’, but I’ve rarely seen this discussed explicitly outside of philosophers’ debates.

      Sorry, this is really a load of questions that your comment stimulated, rather than a focused response! But while I have a lot of respect for Marmot, I still don’t think these (and other) questions have been properly answered, and I’m still trying to make sense of my own role in all this. So continuing these kinds of conversations is really important.

      • Hey Ben,

        Thanks much for the thoughtful response. My own epistemic position is that a commitment to evidence-based policy does not admit of the possibility of a value-neutral policy position. I am fairly radical on these matters — at least by American standards — but I reject the idea that social policy can or should be value-free. What kind of human enterprise can be characterized that way?

        Of course, the production, mobilization, and interpretation of complex evidence is necessarily political and politicized. This is what I mean to emphasize by referring to the necessarily perspectival, fallible, and subjective nature of EBP. I see it as an inevitable point of departure rather than a flaw in the process, because to hope that values and normative commitments can be kept out of the process of producing evidence and making policy is pious, not to mention more than a little absurd, IMO.

        To be sure, there are features of a political, value-laden process that are detrimental to public reason and should be mercilessly identified and criticized, but to say this is not to license the invalid conclusion that the political, value-laden nature of the process itself can be avoided.

        Thus, as to social justice, the argument is overtly political and value-laden, and those commitments and beliefs should be centered in any exercise of public reason, whether in the academy, in policy circles, or, ideally, in the exchange between the two cultures.

        Regarding the issue of uncertainty, there’s a nice meta-literature on this, at least in terms of EBM. My own view is that uncertainty is an enormous feature of the EBP process from soup to nuts, but that such uncertainty can actually be incorporated into sound evidence production and policymaking. But because uncertainty, despite general agreement on its significance and its inevitability, is poorly tolerated in the West, IMO, we do not deploy the tools and techniques we have for managing it and even building it into best practices.

        In any event, I wholeheartedly agree that these questions have not been properly answered, and I too am trying to figure out my own role in all this . . .

  5. Interesting post and comments – thanks! I’ve heard the answer to Alex’s question summarised in one phrase: policy-based evidence. I think I’m on pretty firm ground when I say that any person or institution with power will tend to interpret evidence (particularly as it relates to a system as complicated and tricky to define as a society) in ways that suit their own agenda best. I was intrigued by your mention of New Public Management – have you come across John Seddon and Vanguard?

    • I’ve just checked out the Vanguard site and it looks really interesting, thanks for flagging it – but it sounds like you had a particular point in mind here. Is there something that John Seddon said about the way that particular management philosophies were naively imported into public sector service work, to disastrous effect?

  6. An inevitable problem with evidence-based practice is the difficulty of interpreting the evidence. As it happens, nowhere is that difficulty better illustrated than in health inequalities research. Throughout the world for several decades vast resources have been devoted to such research. But with negligible exception researchers have relied on one or another common measure of inequality without recognizing the ways that each measure tends to be affected by the overall prevalence of an outcome.

    Most notably, the rarer an outcome, the greater tends to be the relative difference in experiencing the outcome and the smaller tends to be the relative difference in avoiding it. Thus, as mortality declines, relative differences in mortality tend to increase while relative differences in survival tend to decrease; as procedures like mammography and immunization increase, relative differences in receiving the procedures tend to decrease while relative differences in failing to receive them tend to increase.

    Absolute differences and odds ratios also tend to be affected by the overall prevalence of an outcome, though in a more complicated way. Roughly, as relatively uncommon outcomes increase in overall prevalence, absolute differences tend to increase; as relatively common outcomes increase in overall prevalence, absolute differences tend to decrease. Differences measured by odds ratios tend to change in the opposite direction of absolute differences.
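    [Editor’s note: the arithmetic behind this pattern can be shown with a small numeric sketch – an illustration added here, not part of the original comment. It assumes each group’s underlying risk is normally distributed, with the disadvantaged group shifted 0.5 standard deviations toward higher risk, and counts an adverse outcome whenever risk exceeds a cutoff; raising the cutoff makes the outcome rarer overall.]

```python
import math

def norm_sf(x):
    """P(Z > x) for a standard normal variable (survival function)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Two groups whose underlying risk is normally distributed; group B's
# distribution is shifted 0.5 SD toward higher risk.  An adverse outcome
# occurs above a cutoff, so raising the cutoff makes the outcome rarer.
rel_outcome, rel_avoid = [], []
for cutoff in [0.0, 1.0, 2.0]:
    a = norm_sf(cutoff)        # group A's outcome rate
    b = norm_sf(cutoff - 0.5)  # group B's (higher) outcome rate
    rel_outcome.append(b / a)            # relative difference in experiencing the outcome
    rel_avoid.append((1 - a) / (1 - b))  # relative difference in avoiding it
    print(f"cutoff={cutoff}: outcome ratio={b / a:.2f}, "
          f"avoidance ratio={(1 - a) / (1 - b):.2f}")
```

    As the cutoff rises from 0 to 2 (group A’s outcome rate falling from 50% to about 2%), the ratio for experiencing the outcome climbs (roughly 1.38 → 1.94 → 2.94) while the ratio for avoiding it falls (roughly 1.62 → 1.22 → 1.05) – the pattern the comment describes.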

    About 140 references explaining these patterns in particular settings may be found on the Measuring Health Disparities page of jpscanlan.com, and the nuances are explained on the Scanlan’s Rule page of the same site. See especially my (1) Can We Actually Measure Health Disparities? Chance 2006;19(2):47-51: http://www.jpscanlan.com/images/Can_We_Actually_Measure_Health_Disparities.pdf; (2) Race and Mortality. Society 2000;37(2):19-35: http://www.jpscanlan.com/images/Race_and_Mortality.pdf; (3) The Misinterpretation of Health Inequalities in the United Kingdom, British Society for Population Studies Conference 2006: http://www.jpscanlan.com/images/BSPS_2006_Complete_Paper.pdf; and (4) Measuring Health Inequalities by an Approach Unaffected by the Overall Prevalence of the Outcomes at Issue, Royal Statistical Society Conference 2009: http://www.jpscanlan.com/images/Scanlan_RSS_2009_Presentation.ppt

    As discussed in Section E.7 of the Measuring Health Disparities page, in recent years there has been a growing recognition of these issues, mainly in Europe. But it does not seem that anyone recognizing the patterns by which standard measures of differences between rates tend to be systematically affected by the overall prevalence of an outcome has taken such patterns into consideration in conducting health inequalities research. Until these patterns are taken into account, little will be known about the nature of such inequalities, much less about what may cause them to increase or decrease.

  7. Hi Ben,

    Interesting stuff and you are right to raise the question about the utility of EBP. Alex raises an equally pertinent question. Whilst ‘Policy-Based Evidence’ is a good slogan, I am not quite so sure that it accurately describes the evidence and policy relationship in most policy areas. I do not have the answers (although this has not stopped me from having my say), but you may be interested in some of Ian Sanderson’s recent work in this area, particularly his notion of ‘Intelligent Policy Making’ http://www.ingentaconnect.com/content/bpl/post/2009/00000057/00000004/art00001 – how far you agree with this will depend on your views towards philosophical pragmatism; interesting reading nonetheless.
