
Daan Kolkman

July 24th, 2020

Is public accountability possible in algorithmic policymaking? The case for a public watchdog

Estimated reading time: 7 minutes

Despite algorithms becoming an increasingly important tool for policymakers, little is known about how they are used in practice and how they work, even amongst the experts tasked with using them. Drawing on research into the use of algorithmic models in the UK and Dutch governments, Daan Kolkman argues that the inherent complexity of algorithms makes attempts to render them transparent difficult, and that to achieve public accountability for the role they play in society, a dedicated watchdog is required.


Political decision making is increasingly deferred to algorithms, but surprisingly little is known about how these quantifications are used in practice. Over the course of my PhD, I spent two and a half years talking to and working with officials who build and use algorithms in the UK and Dutch governments. I found that many of the advantages attributed to algorithms often failed to materialise in policymaking contexts, and that the transparency of algorithms to non-experts is at best problematic and at worst unattainable. For algorithmic accountability to be possible, and for the public to be accurately informed about how algorithms shape their lives, a critical audience is required; this can be realised in the form of a watchdog.


Algorithms are ubiquitous. They are embedded in the products and services we use daily and increasingly help inform public and private sector decision making. The rise of the algorithm is not surprising, considering the two types of benefits associated with it. Firstly, owing to their superior capacity to process large amounts of information rapidly and consistently, even faulty algorithms may outperform people at a number of tasks. Secondly, algorithms can have broader communicative and organisational benefits, such as placing issues on the political agenda, forcing stakeholders to be explicit about their assumptions, and structuring debates.

As such, algorithms present an attractive alternative to biased, subjective, and otherwise flawed decision making by humans. Yet, there have been several high-profile cases that call the authority of the algorithm into question: notably, the role algorithms have played in creating filter bubbles, bias in recidivism models, and their dubious role in the exploitation of platform workers. Such incidents have led to calls for algorithmic accountability from academia and government.

Transparency is considered a requisite to algorithmic accountability. The General Data Protection Regulation (GDPR), for instance, provides a “right to explanation” in the context of automated decision making. However, it remains very unclear what this entails in practice. For instance, academics working in fields such as environmental modelling and integrated assessment have spent decades developing methods and guidelines to explain their models, but this has not prevented major incidents.

Similar efforts to develop explainable artificial intelligence (XAI) or responsible data science, which offer new tools for explaining algorithms, are thus unlikely to have the desired effect. The issue is exacerbated by the increasing complexity of algorithms, the speed at which new techniques are developed, and innovations in software development. While earlier statistical models were subjected to considerable (academic) scrutiny, novel machine learning algorithms can move from development to production within days.

Setting out on this research eight years ago, I was surprised to find there was only limited empirical work on, and understanding of, how algorithms are used to inform organisational decision making. In many ways the context was similar to that of the early laboratory studies of the 1970s and 1980s, on which I drew for my own research, as many of the aspects of algorithmic policymaking remained ‘black boxed’ and under-examined.

To begin to understand how these algorithms were used in a policy context, I took as case studies eight algorithmic models used by the governments of the United Kingdom and the Netherlands. These algorithmic models are a subset of algorithms used in policy making: formal representations of some target or policy area implemented in code. Over the course of two and a half years, I collected data in the form of interviews, observation notes, and documentation for eight models that were being used for purposes varying from policy simulation to forecasting. Notable examples include the Pensim2 model used by the Department for Work and Pensions (DWP) for scenario analyses of pensions policy and the SAFFIERII model used by the Netherlands Bureau for Economic Policy Analysis for macro-economic scenario analysis and forecasting.


When I went through the material, I found that the much-hyped benefits of algorithms were realised less frequently in practice. For instance, the algorithmic models used in policy making were not necessarily very accurate. The algorithmic models covered policy areas with long time horizons; as a result, they were inevitably prone to fundamental uncertainties. Regardless, the people working with these models found them useful, as one practitioner candidly remarked:

“On the whole, everyone sort of suspends their disbelief to some extent around how precise [the model] is. It allows the process to then happen in a much better way.”

This motivated me to shift my attention from what makes algorithmic models useful towards what makes them credible. How does ‘everyone sort of suspend their disbelief’ and agree that the algorithmic model is credible, despite inherent uncertainties and perceived shortcomings? The conditions under which algorithmic models are considered credible may differ per case and may vary per person. Experts with statistical training often assess the credibility of an algorithmic model by using one or more tools that allow them to look inside the black box. One example is sensitivity analysis – a technique that varies the inputs of the model to see how the outputs change (a minimal sketch of this follows below). However, even such experts were not always confident that they understood the algorithm in its entirety:

“How the regression equations are derived? I don’t know a great deal about how they are derived. I’ve seen a lot of the spreadsheets, the workbooks and know how they are working, taking whatever variable and applying this parameter, but then I have no idea how those parameters were developed in the first place.”

In practice, algorithms may not be fully transparent, not even to those who work with them on a daily basis. This lack of transparency does not impede the use of algorithms and efforts to make algorithms transparent to everyone could be misdirected. If experts struggle to comprehend what is going on in a relatively simple algorithmic model, how is the public supposed to understand – or care – how a machine learning algorithm arrived at a particular decision?
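To give a sense of what such a technique involves, here is a minimal sketch of one-at-a-time sensitivity analysis, the approach described above: each input of a model is varied in turn while the others are held at their baseline values, and the resulting change in the output is recorded. The toy pension_spending model, its parameter names, and the baseline figures are illustrative assumptions for this example only; they are not taken from Pensim2, SAFFIERII, or any other model in the study.

```python
# One-at-a-time sensitivity analysis on a toy (hypothetical) policy model.

def pension_spending(retirees: float, avg_benefit: float, uptake_rate: float) -> float:
    """Toy model: total annual spending = retirees * average benefit * uptake rate."""
    return retirees * avg_benefit * uptake_rate

# Illustrative baseline inputs (assumed values, not real policy figures).
baseline = {"retirees": 12_000_000, "avg_benefit": 9_500.0, "uptake_rate": 0.96}
base_output = pension_spending(**baseline)

# Perturb each input by +/-10% in turn, holding the others at baseline,
# and report the relative change in the model output.
for name, value in baseline.items():
    for factor in (0.9, 1.1):
        perturbed = {**baseline, name: value * factor}
        output = pension_spending(**perturbed)
        change = (output - base_output) / base_output
        print(f"{name} x{factor:.1f}: output changes by {change:+.1%}")
```

Even for a model this simple, the exercise only shows how outputs respond to inputs; it says nothing about how the parameters were derived in the first place, which is precisely the gap the practitioner quoted above describes.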

Rather than seeking out transparency for its own sake, efforts towards algorithmic accountability would be better served by exploring ways to institutionalise the review and scrutiny of algorithms. Tools, guidelines, and best practices to improve transparency are not sufficient and do not prevent incidents. Effective policing by an algorithm watchdog may be the only way to increase algorithmic accountability. While such policing is likely to meet with its own share of issues (e.g. high costs, proprietary algorithms), it should be the standard for all algorithms used in government. 

 


This post draws on the author’s article, The (in)credibility of algorithmic models to non-experts, published in Information, Communication & Society.

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns about posting a comment below.

Image Credit: Markus Spiske via Unsplash.



About the author

Daan Kolkman

Dr. Daan Kolkman is a research fellow in computational sociology at the Technical University of Eindhoven and the Jheronimus Academy of Data Science. His research revolves around the sociology of quantification, intelligent decision support systems (e.g. machine learning, artificial intelligence), and organisational decision making. Daan received his PhD in sociology from the University of Surrey (England) for his work on computational models in government.
