
Blog Administrator

April 15th, 2016

Transparency has to be open to all and designed with a purpose in mind



Nick Anstead, Assistant Professor in the Department of Media & Communications at LSE, outlines a number of important issues related to the growing role of algorithms – be it in government, banking, information sharing or security – and the impact they are having on wider society.

One of the major problems that became apparent in our conversation was one I had encountered before, especially in my work on social media analysis of public opinion. A central challenge we raised in that work was the non-transparency of social media analysis: researchers simply publish their final figures (on, for example, which candidate was most popular on Twitter during a televised election debate), but the details of how those figures were arrived at remain elusive.

When we spoke to researchers doing this kind of work, they offered two explanations for this. First, they argued that their algorithms are where the real value of their product lies; to make them publicly available would be to fatally undermine their business. Second, they noted that the processes they were engaged in were so complex that, even if they were to open up the black box, very few people would understand it.

This second problem was certainly evident in our conversation at the seminar. Even if we did open up the algorithmic black box for all the various processes that influence our lives (in the sense of making the code available), that would, at best, lead to a very high-level debate among a small elite of engineers and specialist policy makers. It seems unlikely it could generate much popular debate.

Perhaps a better way of approaching the problem is to return to basic principles, by asking broader questions about what type of society we want to live in and how we justify its structures. One of the recurring concerns raised at the seminar was the idea of algorithms producing unfair outcomes. This in turn raises the broad question: what do we define as unfair? One possible answer to that question is offered by the liberal philosopher John Rawls and his idea of the original position.

Rawls’s argument is based on what he terms the veil of ignorance. This is a simple thought experiment: what kind of society would we construct if we did not know what position we were going to occupy in that society? Put another way, what type of inequality would we rationally sanction if there was a possibility that we might be subject to that inequality? To take a simple example, would we design a society based on racial discrimination if there was a possibility that we were going to be a member of the racial group that would be discriminated against?

Rawls’s original position opens up the possibility of a different type of transparency, one concerned much less with the complexity of code and much more with the relationship between the various possible combinations of inputs that go into a black box and the outputs it generates.

The problem with most algorithmic environments is that we only ever experience them as ourselves. A model of transparency built on this insight (an algorithmic veil of ignorance, as it were) would instead create environments that allow citizens to experience the outcomes that different inputs generate and to play with the black box. This in turn can open up a broader conversation about the fairness of particular variables. For example, we may as a society decide it is wrong to charge people different levels of car insurance based on their gender, but think it is acceptable to do so based on their having points on their driving licence.

Allowing citizens to see the consequences of these variables in action opens up the possibility of such conversations.
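To make the idea concrete, a minimal sketch of such an environment might look like the following, assuming a hypothetical insurance pricing model: the variables, weights and premiums are invented purely for illustration and do not come from any real insurer. The point is that a citizen can vary each input and watch the output change, without ever needing to read the model's code.

# A sketch of an "algorithmic veil of ignorance": rather than publishing the
# code of a pricing model, let people explore how its output changes as each
# input varies. The pricing function and weights below are entirely invented.

from itertools import product

def quote_premium(gender, licence_points, age):
    """Hypothetical car-insurance pricing model (the 'black box')."""
    premium = 500.0
    premium += 40 * licence_points            # penalty points raise the price
    premium += 150 if age < 25 else 0         # younger drivers pay more
    premium += 60 if gender == "male" else 0  # the contested variable
    return premium

# 'Stand behind the veil': enumerate the positions a citizen might occupy
# and show the premium each combination of inputs would face.
for gender, points, age in product(["female", "male"], [0, 3, 6], [22, 40]):
    print(f"gender={gender:<6} points={points} age={age}: £{quote_premium(gender, points, age):.2f}")

Seeing, for instance, that having points on one's licence moves the quoted premium far more than one's gender does (or the reverse) gives people something concrete to argue about, which is exactly the kind of conversation this model of transparency is meant to enable.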

This blog gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science. 

This post was published to coincide with a workshop held in January 2016 by the Media Policy Project, ‘Algorithmic Power and Accountability in Black Box Platforms’. This was the second of a series of workshops organised throughout 2015 and 2016 by the Media Policy Project as part of a grant from the LSE’s Higher Education Innovation Fund (HEIF5). To read a summary of the workshop, please click here.


Posted In: Algorithmic Accountability | LSE Media Policy Project
