Algorithms raise a number of critical issues for regulation

Martin Lodge and Andrea Mennicken

May 1st, 2018

The regulation of and by algorithms has become increasingly relevant to the delivery of public services, coinciding with the related interest in open and big data. Debates about the consequences of the rise of algorithms have, however, been limited. Early contributions considered whether the rise of algorithmic regulation and new information technologies represented a fundamental (mostly benevolent) change in opportunities for citizens and states. Others pointed to the likely reinforcement of existing power structures (such as the detecting powers of states) or to the rise of new, unregulated and private sources of surveillance; yet others noted the complexifying effects of computerized algorithms in generating new types of unintended consequences.

What, however, can be understood as ‘algorithmic regulation’? Is there something clearly identifiable and distinct from other types of regulatory control systems that are based on standard-setting (‘directors’), behaviour-modification (‘effectors’) and information-gathering (‘detectors’)?

One distinctive feature is that algorithms can ‘learn’ – and that the codes on which these algorithms are ‘set’ and ‘learn’ are far from transparent. A second component is their supposedly vast computing power in processing information. A third characteristic concerns the enormous ‘storage’ capacity that (potentially) allows for comparison and new knowledge creation. A fourth element might be the insidious manner in which ‘detection’ takes place: users casually consent to highly complex ‘conditions of service’ and are not necessarily in control of the ways in which their ‘profiles’ are processed and utilized.

Similarly, behaviour modification is said to work through architecture and ‘nudges’. In other words, one might argue that algorithmic regulation is an extension of existing control systems in terms of storage and processing capacity; that it is qualitatively different in that much of the updating is performed by the algorithm itself, in ways that are non-transparent to the external observer, rather than derived from rule-based programming; and that it is distinct in its reliance on observation and default-setting in detecting and effecting behaviours.

At the same time, the notion of decision-making and ‘learning’ by the algorithm itself is certainly problematic. No algorithm is ‘unbiased’: the initial default setting matters, and so does the type of information that is available for updating. To remain ‘neutral’, algorithms might therefore require biased inputs so as to avoid highly undesirable and divisive outcomes. What is meant here by ‘the algorithm itself’ is that the ways in which these algorithms ‘learn’, and what kind of information they process, are not necessarily transparent, not even to those who initially established these codes. Understanding the ‘predictions’ of algorithms is inherently problematic: they resemble the multiple forecasting models used by hurricane watchers, where one day’s ‘perfect prediction’ might be completely ‘off’ the following day.
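The point about default settings and skewed inputs can be made concrete with a minimal, purely illustrative sketch (in Python; the groups, history and threshold below are invented for illustration): a ‘learning’ rule that derives its decisions from historical outcomes simply reproduces whatever skew the history contains.

```python
# Illustrative only (not any real regulatory system): a 'learning' rule
# that sets approval decisions from historical decisions. If the history
# is skewed, the learned rule reproduces the skew.

from collections import defaultdict

# Hypothetical historical decisions: (group, approved) pairs.
# Group "B" was approved far less often in the past.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group):
    """Approve if the historical approval rate for this group exceeds 50%."""
    approved, total = counts[group]
    return approved / total > 0.5

for g in ("A", "B"):
    rate = counts[g][0] / counts[g][1]
    print(f"group {g}: historical approval rate {rate:.0%}, model approves: {predict(g)}")
# group A: historical approval rate 80%, model approves: True
# group B: historical approval rate 30%, model approves: False
```

The rule is not maliciously designed; ‘learning’ from the available data is itself a default setting with distributional consequences.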

In addition, there are a number of critical issues for regulation. Firstly, what is the impact of algorithms on ‘users’ of public services? One might argue that algorithmic regulation brings new opportunities for users, as it generates powerful comparisons that potentially grant users greater choice in markets (and quasi-markets) than before. Similarly, algorithmic regulation can also be said to increase the potential for ‘voice’: enhanced information can be used for more powerful engagement by users of public services. The threat of ‘choice’ and ‘voice’ might make providers of services more responsive to users.

At the same time, the fact that simple search results can already have powerful choice-deciding consequences raises questions as to how informed user choice can be obtained in an age of ‘google knowing’ (the unquestioned acceptance of the most prominent search results).

As individual experiences disappear into ‘big data’, engagement is mediated. The lack of transparency about the ways in which user experiences are mediated – and through which means – remains a central part of the debate. Different means of mediating such experiences exist: they might be based on explicit benchmarking and league-tabling (thereby relying on competitive pressures), on providing differentiated analyses so as to facilitate argumentation and debate, or on enhanced hierarchical oversight. As noted, algorithms are not neutral. They are not just mediation tools, but are of a performative and constitutive nature, potentially enhancing rather than reducing power asymmetries. In short, regulation by algorithm calls for regulation of the algorithm in order to address its built-in biases.
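The non-neutrality of league-tabling can be illustrated with a small, hypothetical sketch (the providers, indicators and weights are invented): the same performance data yields opposite rankings depending on how the indicators are weighted, a choice that sits inside the algorithm rather than with the user.

```python
# Illustrative only: the same scores, ranked under two defensible weightings.

providers = {
    "North Trust":  {"waiting_time": 0.9, "outcomes": 0.5},
    "City Trust":   {"waiting_time": 0.5, "outcomes": 0.9},
    "Valley Trust": {"waiting_time": 0.7, "outcomes": 0.7},
}

def league_table(weights):
    """Rank providers by a weighted sum of their indicator scores."""
    score = lambda p: sum(weights[k] * v for k, v in providers[p].items())
    return sorted(providers, key=score, reverse=True)

# Weighting access heavily vs. weighting outcomes heavily:
print(league_table({"waiting_time": 0.8, "outcomes": 0.2}))
# ['North Trust', 'Valley Trust', 'City Trust']
print(league_table({"waiting_time": 0.2, "outcomes": 0.8}))
# ['City Trust', 'Valley Trust', 'North Trust']
```

Whoever sets the weights, in effect, sets the league table.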

Secondly, as regulation via algorithm requires regulation of the algorithm, questions arise as to what types of control are feasible. In debates about the powers of state surveillance, one argument has been that the state’s ‘intelligence’ powers are more accountable than those of private corporations. Such a view is controversial, but it raises the question of how state and non-state actors should be held accountable (i.e. subject to reporting standards potentially backed by sanctions) and transparent (i.e. open to external scrutiny). Transparency might also increase vulnerability to manipulation. Given the transnational nature of much corporate activity, it also raises the question of jurisdiction and the potential effects of national and regional regulatory standards (such as those relating to privacy).

Thirdly, there are questions concerning the kind of regulatory capacities required for the regulation of and by algorithm. Arguably, this is the age of the forensic data analyst and programmer rather than the lawyer and the economist. Altering regulatory capacities in that way may prove challenging in itself. It is also likely to be testing because the analytical capacities of the ‘forensic data analyst’ need to be combined with other capacities in terms of delivery, coordination and oversight. New combinations of analytical capacities are also required: when it comes to the regulation of information, for example, it is not just the presentation of particular ‘facts’ that requires monitoring, but increasingly also their visualization. In the field of energy, it requires the combination of engineering expertise and data analysis.

Furthermore, there is the question of at what point such regulation of the algorithm could and should take place. One central theme in ethical debates has been the default setting: algorithms should not be set to make straightforward ethical choices, but should be programmed to make ‘context-dependent’ choices. Such a perspective is problematic, as no algorithm can be ‘neutral’. As information can emerge and be ‘wiped’ (but not everywhere), as complex information systems generate new types of vulnerabilities, and as information itself can be assessed in remote (non-intrusive) ways, regulatory capacity is required to deal with information in ‘real’ rather than ‘reactive’ time.

An additional central issue for the regulation of algorithms is vulnerability to gaming and corruption. We define ‘gaming’ as the use of bots and other devices to mislead: information flows are generated that might, at first, appear ‘real’ but, on closer inspection, turn out to be generated by artificial means and/or inflated so as to give some ‘information’ greater visibility than others. This might involve the use of social media to communicate certain messages, or efforts to enhance the visibility of certain websites on search engines. Corruption, in contrast, is the explicit attempt to undermine the functioning of the system rather than to exploit it. This is the world of cyber-security and the protection of critical infrastructures that increasingly operate in the cloud without sufficient protocols to deal with ‘black swans’, let alone ‘fancy (or cozy) bears’ (Haba, 2017).
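How little artificial traffic it takes to dominate a visibility ranking can be shown with a purely illustrative simulation (the items, user counts and bot numbers are invented):

```python
# Illustrative only: a handful of bot accounts inflating one item's
# visibility score. All figures are invented.

import random
from collections import Counter

random.seed(1)  # reproducible illustration

# Genuine engagement: 300 users, each interacting once with a random item.
organic = [random.choice(["item_a", "item_b", "item_c"]) for _ in range(300)]

# Gaming: 20 bot accounts, each generating 15 interactions with item_c.
bot_traffic = ["item_c"] * (20 * 15)

visibility = Counter(organic + bot_traffic)
print(visibility.most_common())
# item_c now dominates the ranking, although its organic share was only ~1/3.
```

A ranking that counts interactions without authenticating their source treats the bot traffic as ‘real’ by construction; detecting such inflation is precisely the kind of forensic task that regulators of the algorithm would face.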

In response, it might be argued that regulation by algorithm makes gaming less likely when it comes to oversight. Performance management by target and indicator is widely said to suffer from extensive gaming and manipulation. The power of algorithms to process information could be said to enhance the possibilities for regulators to vet information in unpredictable ways, thereby reducing organizational opportunities to game. However, assessing complex organizations via algorithms remains a demanding undertaking that does not necessarily enhance the predictive powers of regulatory oversight.

Finally, fundamental ethical questions remain. Artificial intelligence devices can quickly turn racist as they process information with embedded explicit and implicit biases. This raises issues about the transboundary effects of national (state and non-state) efforts to set standards, and about the differential interests of users – insisting on ‘privacy’ on the one hand, but demanding ‘ease of use’ on the other. Lastly, it raises the ethical question about the nature of public policy: what kind of expertise should be prioritized?

In sum, the question of how to deal with the regulation of algorithms returns us to the underlying normative position established by Harold Lasswell in his call for an interdisciplinary field of ‘policy analysis’, namely the need for a population with knowledge of and in the policymaking process. How the regulation of and by algorithm in the area of public services is pursued is therefore of critical importance for the study and practice of risk and regulation.

♣♣♣

Notes:

  • This appeared originally in Risk and Regulation, the magazine of LSE’s Centre for Analysis of Risk and Regulation (carr). As part of its ESRC seminar series on ‘Regulation in Crisis?’, carr, together with King’s College London’s TELOS centre, organized an international workshop on algorithmic regulation in July 2017. 
  • The post gives the views of its authors, not the position of LSE Business Review or the London School of Economics.
  • Featured image credit: Image by markusspiske under a CC0 licence


About the author

Martin Lodge

Martin Lodge is director of the Centre for Analysis of Risk and Regulation (CARR) at LSE. He is also a professor of Political Science and Public Policy. His key research interests are in the areas of executive politics and regulation. He teaches on courses in Public Administration & Public Policy, Public Management, and Regulation. Professor Lodge is co-editor of the journal Public Administration and of a book series on ‘Executive Politics & Governance’ (with Palgrave). He chairs the UK Political Science Association’s specialist group on Executive Politics and Governance and co-chairs the International Political Science Association’s Structure and Organisation of Government research committee. He is part of the TransCrisis research consortium (funded under European Commission’s Horizon2020 programme).

Andrea Mennicken

Andrea Mennicken is associate professor in LSE's Department of Accounting, and deputy director of the Centre for Analysis of Risk and Regulation (carr). Her research focuses on economic transition and transformation (post-Soviet Russia); global accounting and audit regulation; international standardisation; professions; social studies of accounting.

