Managing Editor

March 14th, 2013

The current debate about ‘evidence centres’ obscures a broader transformation in how policy is evaluated

The Government was widely praised for its recent commitment to establishing ‘evidence centres’ to evaluate the efficacy of social policy. However, Will Davies argues that the enthusiasm for the empirical evaluation of policy outcomes is part of a broader shift from economics to medical research as the gold standard for policy knowledge. He fears that this mentality serves to obscure the inevitable moral and political dimensions of policy decisions, risking the establishment of a state which gets smarter but also more and more opaque as to whom its interventions are targeted at and why.

Reading the news that the government is to set up new ‘evidence centres’ to inform, criticise and influence £200bn-worth of public policy, my immediate thought was – why now? New Labour came into power singing the praises of ‘evidence-based policy’ in 1997, and became widely celebrated for its ability to bring new insights from economics, psychology and sociology straight into Downing Street, via the Strategy Unit and academic wonks such as Geoff Mulgan and David Halpern. Even before then, the relentless auditing and measuring of public services was introduced under the guise of ‘new public management’ (though new research by Christopher Hood and Ruth Dixon shows that this never achieved great savings). And the Treasury’s Green Book has offered a set of techniques for policy evaluation and appraisal for decades.

It seems now that policy is undergoing quite a rapid shift from treating economics as the standard for policy knowledge to treating medical research as the standard. This is clear from the fact that the evidence centre idea arose initially from a proposal to create a ‘NICE for social policy’, to find out ‘what works’. Economics itself is undergoing a much slower mutation, from aspiring to the status of physics to aspiring to that of biology. The infrastructure of government that is needed to facilitate these shifts is quite different and far more extensive.

The key thing here, epistemologically, is that the state is trying to relinquish an a priori account of causality. The methodology of neo-classical economics has a very particular theory of causality, which it packages up within its assumed psychological theory of preference satisfaction. This theory then structures what types of evidence are collected, how they are interpreted, and the political uses to which they are put. Inevitably, like any useful scientific framework, it also limits what types of conclusions might be drawn and the types of knowledge that might arise. A scientific framework that presupposed nothing about the world – or in this case, society – wouldn’t be able to make any sense of the world.

Yet the spread of medical epistemology into public policy is strangely anti-theoretical, thanks to a somewhat naively optimistic view of a single technique: the randomised controlled trial (RCT). RCTs operate according to induction. The facts are meant to speak for themselves; the data and the theory are kept neatly and self-consciously separate from each other. A medic, Ben Goldacre, has co-authored a paper on the policy applications of RCTs for the British government, which opens with the line ‘RCTs are the best way of determining whether a policy is working’. Elsewhere, RCTs are often referred to as the ‘gold standard’ for scientific testing, a term that confirms the dangers of metaphorical tourism, given that, while economists are happy to speak biologically of ‘toxicity’ and ‘contagion’ in the financial system, only the crankiest libertarians amongst them would countenance a return to the actual gold standard.

This is supplemented epistemologically by the rise of Big Data, which no doubt is already on the minds of forward-thinking policy experts, especially in the domain of health behaviours. The very character of Big Data is that it is collected with no particular purpose or theory in mind; it arises as a side-effect of other transactions and activities. It is, supposedly, ‘theory neutral’, like RCTs. Indeed the techno-utopians have already argued that it can ultimately replace scientific theory altogether. Hence the facts that arise from Big Data are somehow self-validating; they simply emerge, thanks to mathematical analysis of patterns, meaning that there is no limit to the number or type of facts that can emerge. There’s almost no need for human cognition at all!

The problem with all of this, politically, is that causal theories and methodologies aren’t simply clumsy ways of interfering with empirical data, but also provide (wittingly or otherwise) standards of valuation. The reason economics has proved so powerful as a governmental tool is not because it is empirically correct about the drivers of human behaviour (no economist since Jevons has really claimed this; sociologists are barking up the wrong tree in this particular indictment) but because it provides a very clear idea of what a ‘good’ or ‘bad’ outcome would look like, in a particular situation. Pragmatically speaking, it saves decision-makers from having to have moral arguments about things, by placing numerical values on them instead.

Health policy is one area where this type of simplification is hardest to achieve, because moral trade-offs are often impossible to keep at bay. From the little I know of NICE and health economics, a great deal of what they wrestle with is not establishing ‘facts’ or ‘evidence’, but coming up with incredibly convoluted ways of agreeing what the standards of evaluation should be in the first place. QALYs (quality-adjusted life years) are the most obvious example of this, but philosophical arguments about ‘experienced utility’ versus ‘reported utility’ and so on rage on in health economics, because pain, suffering and life itself are at stake, and these cannot easily be subjected to an efficiency analysis. As a goal, ‘health’ or ‘wellness’ lacks the finality of ‘efficiency’ or ‘consumer welfare’, meaning that there is a risk that health and wellbeing policies can never truly ‘work’.
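(As a rough illustration of the metric, not drawn from the original post: a QALY weights extra life-years by a quality-of-life score between 0 and 1, and interventions are then compared on cost per QALY gained.)

\[
\text{QALYs gained} = \text{extra life-years} \times \text{quality weight},
\qquad
\text{cost per QALY} = \frac{\text{incremental cost}}{\text{QALYs gained}}
\]

So an intervention that adds four years of life at a quality weight of 0.75 yields 3 QALYs; if it costs £15,000 more than the alternative, that is £5,000 per QALY, comfortably below the £20,000–£30,000 per QALY range that NICE has conventionally treated as its threshold. The controversy gestured at above is precisely over where that quality weight comes from and whose experience it is supposed to measure.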

It is therefore a little ironic that NICE has become the model for social policy evaluation, given that social policy-makers have had a very clear toolkit for evaluating policies for over 40 years (the Green Book), whereas health policy evaluation is in a permanent – and necessary – state of philosophical self-doubt. By adopting the inductivist epistemology associated with RCTs and Big Data, social policy-makers may learn a great deal more about the world, but may also become commensurately less sure of what it even means for a policy to work in the first place.

For those of us with an innate suspicion of government positivists, this should be a good thing. Government might become more humble in its ambitions, cancel policies more readily, recognise the complexity of society. On the other hand, there is a risk that, as with RCTs in psycho-pharmaceuticals, diagnoses of social pathologies might start to spiral. Whole new problematic demographic sub-groups will start to appear to the gaze of the data analyst; new correlations of behavioural problems will be spotted; the perceived sources of our social, psychological and neurological malaises will simply multiply, and we’ll long for an age when it was all just a problem of the wrong ‘incentives’. Tesco’s Clubcard is rumoured to produce 18,000 sub-groups of customers; the equivalent for the state would be 18,000 sub-groups of pathological behaviour to be nudged back into line. Without the extreme simplifications of rationalist theories, society would appear too complex to be governed at all. The empiricist response to the government’s paper title, ‘What Works’, might end up being ‘very little’, unless government becomes frighteningly ‘smart’. Alternatively, if theory no longer provides the procedures of evaluation, there is a risk that private backroom politics will do so instead.

I exaggerate, of course. But the political issue is really this. Where clunky old economics, with its unrealistic models, is used to deliver ‘evidence-based policy’, political and moral debate can be sidelined; yet the standard by which things are being judged is not difficult to discern. There is some publicness about this (I did some interviews with government economists a couple of years ago, and they couldn’t understand why journalists never dipped into departmental policy evaluations, which are all published and many of which are quite politically problematic). But where the state becomes a theory-less, inductivist, RCT-ing, data-analytical state, accumulating more and more data to find out ‘what works’, we are entitled to ask what working might actually mean. A clear and transparent utilitarianism, oriented around efficiency, may be preferable to a vague and opaque utilitarianism, oriented around some metaphor of systemic ‘wellness’.

Nothing simply works unambiguously in social policy, gold standard or no gold standard. No policy delivers benefits without any ‘side-effects’ (to play along with the game of policy doctors and nurses). A policy might ‘work’ in terms of reducing unemployment, but lead to an increase in family breakdown. The inductivist’s response would be – yes, and that’s precisely the type of pattern that our new evidence centres will detect! So why use the rhetoric of ‘what works’, when it is plain that nothing unambiguously works, at least without also offering the standard (the QALY for social policy, if you like) through which ethical dilemmas and trade-offs will be addressed? If all of this opens up space for non-utilitarian political debate about the multiple, competing purposes of social policies, and the types of procedures and authority that might be used to navigate between them, then that would be welcome. But the alternative is a Tesco Clubcard-state, which gets smarter and smarter, and more and more opaque as to whom its interventions are targeted at and why.

This was originally published on Will’s blog.

Note: This article gives the views of the author, and not the position of the British Politics and Policy blog, nor of the London School of Economics. Please read our comments policy before posting.

About the author

Will Davies is Assistant Professor at the Centre for Interdisciplinary Methodologies, University of Warwick. 





This work by British Politics and Policy at LSE is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported licence.