
Hamish Robertson

Joanne Travaglia

February 7th, 2018

An emerging iron cage? Understanding the risks of increased use of big data applications in social policy

Estimated reading time: 10 minutes

Big data technologies are increasingly being utilised in the field of social policy. Although big data methods and strategies are often embraced as a form of evidence-based policy development, big data techniques do not necessarily guarantee scientific objectivity. Hamish Robertson and Joanne Travaglia discuss concerns about the rapid growth in big data methods being used to inform and shape social policy strategies and practices, and in particular the underlying assumptions such data are used to support and “verify”.


Big data technologies are increasingly being utilised in the field of social policy, including in areas such as policing, crime and justice, education, and social services. Although these sectors often seek to use big data methods and strategies as a form of evidence-based policy development, big data techniques do not of themselves guarantee the production of objective knowledge regimes. Further caution is warranted when we consider that the sectors utilising such emergent technologies have histories of information use and, at times, abuse. A particular risk of big data social policy is the use of new technologies to support, covertly or overtly, consciously or inadvertently, specific socio-cultural ideologies and practices, including existing systems and forms of inequality. Here we discuss some concerns about, and caveats on, the rapid growth in big data methods being used to inform and shape social policy strategies and practices.

Social policy and quantification

Contemporary social policy has emerged in a context deeply influenced by several intellectual and moral positions created by Victorian-era social ideologies, both liberal and conservative. This was a pivotal period of change between traditional, or at least conventional, early-modern science and what is now construed as “science triumphant” and the modernity it supports. In creating this transition, the Victorians influenced how we think about a variety of topics, partly through the introduction and/or modification of engines of knowledge, many of which we now take for granted. Three conceptual devices or “engines” that continue to influence social policy are: taxonomies of the social; quantification; and forms of social categorisation.

Where once these engines were entirely analogue in conceptualisation and function, they are now increasingly digital. As with the analogue originals, their emerging digital variants pose a mix of opportunities and risks which require informed and critical negotiation. As policy domains rapidly turn to the digital for the collection, analysis, and storage of data, digital data are increasingly used to inform, even to “evidence”, the veracity of digitally informed policy positions, and even serve as the tool for achieving successful social policy outcomes. This alone makes the digitisation of policy an important focus for analysis but, in several senses, it is less the digitisation itself that concerns us and more the underlying assumptions such data are used to support and “verify”.

While current positions and debates may seem novel – and, as with discussions about artificial intelligence (AI), even futuristic – in practice they continue to bear the traces of the social theory framework that the Victorians bequeathed to us. More particularly, conservative Victorianism persists in social policy practice, including, at times, an implicit acceptance of the supposed immutability, if not “naturalism”, of social divides and inequalities. These are perpetuated through policy-based institutions such as education, where academic performance is seen as individual but correlates closely with social status, and the law, where criminality is associated with low social status and is highly racialised while, by way of contrast, white-collar crimes are more frequently treated as misdemeanours. The weight of big data methods in social policy falls more heavily on the weak.

Where once a variety of analogue options and independent providers existed, the prevailing imperative now is to “break down” information silos and connect the previously unconnected. A risk here is that the digitisation of social policy follows Castells’ information/network society model into an oppressive paradigm. Where once we faced the threat of the analogue panopticon with its human observers, we may now be building an automated, digital surveillance network.

What is very clear with current big data applications in a variety of social policy environments is a deep and often unquestioning link to forms of social determinism. Ben Williamson has illustrated just how committed a variety of educational policy theorists and practitioners are to the idea that quantification, including digital quantification, will allow educational outcomes to be determined by policy actions. He points out, in addition, how such methods tend to function in the increasingly privatised social policy environments across the Anglo-American world. The rise and rise of “independent” think tanks and their often highly partisan funding arrangements is only one example of how social policy advocacy works under the neoliberal agenda. “Nudge” strategies for healthy behaviours (don’t smoke, but we rely on tobacco taxes) are already in play.

A basic premise of these types of approaches is that quantification, as a form of abstraction, provides better evidence of what is, and what works. In this logic, it is far better to count data from a hospital, school, or council than to observe our complex social institutions in action. Nor is this constrained to the field of policy development and analysis. High-impact academic journals are less likely to publish qualitative research, and some have gone so far as to indicate that qualitative research is an “extremely low priority” (as in the case of The BMJ), as though small data were somehow lesser than data generated en masse, whatever the quality of the research design, analysis, or interpretation. Each of these approaches has its limits, and each is influenced by the epistemological, ontological, and ideological stance of the researchers (and of their funders and employing institutions).

Image credit: André Sanano, via Unsplash (licensed under a CC0 1.0 license).

The digital poorhouse

The risk with big data is that its association with technology – and therefore (assumed) scientific objectivity – means the problematic nature of the paradigm can go unexamined by the creators and consumers of social policies. It is becoming increasingly clear that the fundamental epistemic transition currently underway does not of itself negate past ideological pre-commitments. The change from a largely analogue small data environment to a foundationally digital one has not undermined the pervasive ideologies that the small data paradigm produced and institutionalised. Yet big data provides increased legitimacy and therefore increasingly powerful tools with which to pursue specific social agendas: good, bad, but never indifferent. The evidence base of digitised social policy is, therefore, no guarantee of a fairer or more democratic society.

Virginia Eubanks has coined the phrase “digital poorhouse” to describe this emergent paradigm of extreme digital surveillance of negatively constructed social groups and categories. Automation in digital systems and sub-systems, she argues, is creating a new type of social regulation, one that is (deliberately) unmediated by human compassion or even the consideration of individual circumstances. This is social policy as an iron cage: designed, implemented, and monitored via digital mechanisms that can operate in ways unaccountable to the public, with features developed with social regulation foremost in mind. As we have noted elsewhere, this is coercive big data in action.

Where once we had a slow, analogue set of systems and processes leading to the quantification of social data and social policy interventions, the datafication of ideological positions can now be presented as “objective” social policy interventions. One of the more worrying ideological positions apparent in data-informed social policy is that the “evidence” produced by such methods helps inform, direct, and even coerce “good behaviours” on the part of citizens. The normative definition of “good” behaviour usually assumes the values of those designing and implementing the interventions, and very rarely those of the people at whom the interventions are directed. Good organisational behaviours, for example, can be constructed and supported by policies intended to cost providers less, without recognition or mediation of their differential impact on the most vulnerable clients.

Conclusion

The appeal for governments of big data informing evidence-based social policy is obvious. Current government structures, mechanisms, and even, we would argue, foci are the result of a century or so of ideological shuffling. Claims to act on evidence have been especially popular since the beginning of the Cold War and remain popular under neoliberalism in its various incarnations. Evidence claims support a dogma which purports that facts trump beliefs, even when each and every “fact” is predicated on a stance of some description – even if that stance is that it is scientifically acceptable to systematically exclude half the population from what is claimed to be “gold standard” research.

There clearly exists a wide range of ethical concerns and complexities in the social policy domain. The rise of calls for evidence-based social policy should underscore the continued need for ethical inquiry into, and oversight of, social policy design, implementation, evaluation, and outcomes studies. As with so many other areas of society, premising action on good intentions alone is an utterly insufficient response. The New Zealand Government has recently released a paper discussing some, though not all, of the risks inherent in big data applications to social policy.

The genuine risk is that the evidence produced through the lens, even more than the mechanisms, of big data used in social policy domains will be elevated ipso facto to the status of “science”. The possibilities inherent in big data may simply resolve, uncritically, into a highly influential part of the established data orthodoxy in social policy practices of the 21st century. In doing so, the social policy field risks not just replicating but magnifying existing assumptions and inequalities rather than addressing them.

 


Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.


 


About the author

Hamish Robertson

Hamish Robertson is a geographer with experience in healthcare including a decade in ageing research as well as more recent disability work. He is currently a Senior Lecturer with the Centre for Health Services Management, University of Technology Sydney. 

Joanne Travaglia

Joanne Travaglia is a medical sociologist and Professor and Chair of Health Services Management at the University of Technology Sydney. Her research addresses critical perspectives on health services management and leadership, with a focus on the impact of patient and clinician vulnerability and diversity on the safety and quality of care.

Posted In: Big data | Data science | Evidence-based policy | Government
