
Blog Administrator

April 28th, 2016

What’s at Stake in Algorithmic Accountability


Nick Couldry, Professor of Media, Communications and Social Theory at LSE, explores the challenges for social theory and civic debate in addressing the outcomes of automated systems and decision making.

We live in a time when the contexts of knowledge production are changing fundamentally: the multiple interlinked processes that generate claims to knowledge (from data collection and aggregation to data synthesis and processing) are now, to an unprecedented degree, delegated to forms of automation, in particular algorithmic calculation. The ‘back end’ of knowledge comes to have consequences for the ‘front end’ of knowledge (what institutions and individuals can claim to others) in ways to which we are unaccustomed. Under these conditions, it is essential, as Carolin Gerlitz has argued, for social researchers to reflect intensively on the consequences of this major recontextualisation of claims to knowledge, and this need surely goes wider than those who produce research about the social within universities. And as researchers, we need to engage with open eyes with the new boundaries set by such contexts and with the productive entities that create them, showing a readiness to adapt and learn, as Mireille Hildebrandt emphasised in her blog for the Media Policy Project.

That to me seems uncontroversial, but does not yet get us to the major difficulties which we must confront. Those difficulties emerge as soon as we consider the consequences when such newly contextualised forms of knowledge become embedded in authoritative processes for ordering the world: that is, when data processes become ‘actionable’ (see Louise Amoore’s book The Politics of Possibility). Some examples include the offers, or refusals, of financial credit; the granting or refusal of visas for crossing borders; the offer, or not, of preferential pricing on air tickets; the discriminatory organisation of differentiated forms of social control. What rights should individuals outside those evaluation processes – including of course the data ‘subjects’ of such automated, algorithm-driven decisions – have to challenge them?

As Frank Pasquale and Joshua Kroll bring out in their provocations for the Media Policy Project, the old norm of transparency (which assumes a link between the provision of shared data/information and consumer protection) is not enough to provide credible protection here, and for a basic reason: we do not know what exactly we need to see to be reassured by automated processes of calculation – or, more importantly, by nonlinear forms of machine learning. Even more basically, what is it that we see when we are offered something that claims to be a sign of transparency, and how do we establish whether that is what we actually need to see to judge the validity of knowledge production?

Joshua Kroll’s proposal of automated tests for assessing certain types of procedural regularity in algorithmic processing is certainly an important contribution to the debate. But it only deals with questions of procedural regularity and not the other dimensions that are critical to testing the legitimacy claims of any decision-making process: a) were the inputs to that process the right type of input, and were they valid? b) was the framing of the calculation itself satisfactory, and to what particular end? c) were those ends themselves satisfactory and legitimate, given the social or practical ends that the user was trying to achieve?
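To make the distinction concrete, the minimal Python sketch below is my own illustration of what a bare procedural-regularity check amounts to; it is not Kroll’s actual proposal, which relies on cryptographic commitments and zero-knowledge techniques, and all names, inputs and thresholds here are hypothetical. The point it illustrates is exactly the one above: an auditor can confirm that a committed decision procedure, applied to the recorded inputs, really did produce the published decision, while remaining silent on questions (a) to (c).

```python
# Toy illustration of a procedural-regularity check (hypothetical example,
# not Kroll's cryptographic machinery). An auditor verifies that a published
# decision is the output of a fixed, committed procedure applied to the
# recorded input - and nothing more.

import hashlib
import inspect
import json


def credit_decision(applicant: dict) -> str:
    """Hypothetical deterministic decision rule."""
    score = applicant["income"] / 1000 - applicant["defaults"] * 5
    return "approve" if score >= 20 else "refuse"


def commit(obj) -> str:
    """Hash commitment to a procedure's source code or to an input record."""
    payload = inspect.getsource(obj) if callable(obj) else json.dumps(obj, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


def audit(procedure, proc_hash, applicant, input_hash, published_decision) -> bool:
    """Confirm the committed procedure, applied to the committed input, yields
    the published decision. This establishes regularity only: it says nothing
    about whether the inputs, the rule's framing, or its ends are legitimate."""
    return (commit(procedure) == proc_hash
            and commit(applicant) == input_hash
            and procedure(applicant) == published_decision)


applicant = {"income": 28000, "defaults": 1}
proc_hash, input_hash = commit(credit_decision), commit(applicant)
decision = credit_decision(applicant)

print(audit(credit_decision, proc_hash, applicant, input_hash, decision))  # True
```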

Frank Pasquale’s provocation goes some way towards answering some of these questions, and also highlights the new problem that arises when a large part of the context of knowledge production involves delegation to automation: d) can those responsible for the knowledge process actually give an account of what it is they have done, in order to convince others that it was a rationally based decision? What if they cannot? Should the internal decision-making process for which they want social acceptance actually be treated as legitimate? What would a social world look like where a significant range of actors cease ex hypothesi to be able to give a satisfying account of what they are doing? How can wider institutional legitimacy be sustained in such a world, and what are the consequences – for consumers or citizens – of such legitimacy not being sustained?

These are the points in which both social theory and civic debate should take a close interest as they turn to the practical outcomes of debates on algorithmic accountability. The growing gulf between data protection law in Europe and the US makes transnational comparative research essential here.

This blog gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science. 

This post was published to coincide with a workshop held in January 2016 by the Media Policy Project, ‘Algorithmic Power and Accountability in Black Box Platforms’. This was the second of a series of workshops organised throughout 2015 and 2016 by the Media Policy Project as part of a grant from the LSE’s Higher Education Innovation Fund (HEIF5). To read a summary of the workshop, please click here.
