How should we deal with the sorts of automated predictive systems that increasingly impact our lives? Mireille Hildebrandt of Vrije Universiteit in Brussels argues that we must learn to interact with the new mindless agents that now saturate what she terms “our onlife world”.

Law-as-we-know-it is premised on the fundamental distinction between mind as active, and matter as passive. This distinction no longer holds. Increasingly, artificial, mindless agency – here discussed with particular reference to algorithms – pops up to predict human behaviour (from our buying habits, to health risks, earning capacity, creditworthiness, employability) in order to pre-empt our intent (by offering discounts, policing our behaviours, filtering news items), or to provide us with access to knowledge (search engines). Let me sum up my concerns.

The big data conundrum

Talking about Big Data analytics – the process of making inferences from large data sets containing a variety of data types – can be pretty boring, unless you explore the backend of the systems that thrive on such data. This is where the actual operations take place, where data are ‘cleansed’ and ‘parsed’ like the ingredients for the cake we want to eat (and have, too, if we can). For an overview of big data, analytics and algorithms, see the cross-disciplinary conversation between computer scientists, lawyers and philosophers in Profiling the European Citizen.

What happens behind the screens and their user-friendly interfaces determines the decisions taken by technical systems that depend on Big Data. Decisions about health risks, about creditworthiness, about the likelihood of social security fraud, about employability or about eligibility for certain schools or colleges: all issues with a direct impact on ordinary people’s lives. Often, the algorithms that run these systems are trained and tested on data sets that help them to learn. Popular discourse now suggests that the more data we feed into these systems, the better. However, contrary to what writers such as Mayer-Schoenberger & Cukier suggest, more data are not necessarily better data, as I have argued previously, along with boyd and Crawford.
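As a toy illustration of why more data are not necessarily better data, consider estimating a simple population rate: a small representative sample can outperform the same sample drowned in a much larger but skewed one. All numbers below are invented for illustration and do not come from any real data set.

```python
# Hedged toy illustration: suppose the true fraction of some trait in a
# population is 0.5, and we estimate it as the mean of our sample.
true_rate = 0.5

small_unbiased = [1, 0, 1, 0, 1, 0]    # small but representative sample
extra_biased = [1] * 90 + [0] * 10     # a flood of data from one skewed source

def estimate(sample):
    """Estimate the population rate as the sample mean."""
    return sum(sample) / len(sample)

# The small sample hits the true rate; piling on biased data drags
# the estimate towards the skewed source, making the error larger.
err_small = abs(estimate(small_unbiased) - true_rate)
err_big = abs(estimate(small_unbiased + extra_biased) - true_rate)
```

The point of the sketch is that volume does not cure selection bias: which data are collected matters more than how many.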

In fact, data analytics is all about selection; it is pattern recognition that is critical (as in critical infrastructure). Data cannot speak for themselves; they are tweaked into speaking by means of the algorithms used to mine them. This means that the selection of data points matters: pattern recognition requires selecting before you collect, selecting while you collect and, finally, selecting after you collect the data. The point is not algorithms in a general sense, but which algorithms in particular. And to grasp that point we need to look into machine learning (ML – in which computers acquire the ability to learn, based on supervised or unsupervised algorithms) or even deep learning (DL – a sophisticated form of machine learning based on multi-layered artificial neural networks). The point is that machines are actually learning. They are learning from the feedback they observe, adapting their behaviours, their inner workings, their computational operations – based on this feedback – to improve their performance. For an excellent, accessible introduction to machine learning, see Machines that learn in the wild.
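To make ‘learning from feedback’ concrete, here is a minimal sketch of a supervised learner – a classic perceptron trained on invented toy data – that nudges its internal weights in response to each prediction error until its performance improves. It is purely illustrative, not a model of any system discussed here.

```python
# A perceptron adjusts its inner workings (its weights) based on the
# feedback it observes: the gap between its prediction and the label.

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear threshold unit from labelled feedback."""
    w = [0.0] * (len(examples[0]) + 1)  # weights, plus a bias term at w[0]
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) > 0 else 0
            error = y - pred                     # the feedback signal
            w[0] += lr * error                   # adapt the bias...
            for i, xi in enumerate(x):
                w[i + 1] += lr * error * xi      # ...and each weight
    return w

def predict(w, x):
    return 1 if w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) > 0 else 0

# Invented toy data: the label is 1 when the two features sum to more than 1.
data = [(0.2, 0.1), (0.9, 0.8), (0.1, 0.4), (0.7, 0.9), (0.3, 0.2), (0.8, 0.6)]
labels = [0, 1, 0, 1, 0, 1]
w = train_perceptron(data, labels)
accuracy = sum(predict(w, x) == y for x, y in zip(data, labels)) / len(labels)
```

Nothing in this loop involves understanding; the ‘learning’ is a purely computational adaptation to feedback, which is exactly what makes the question of who supplies that feedback so important.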

The point – then – is who trains the algorithms, and how, based on what ‘performance metric’? The point is who is capable of evaluating the outcomes of these decision systems, and how they do this. Who decides the variables that determine the ‘performance metric’? How can we make their operations visible and contestable, instead of succumbing to the objectivity fallacy or to the complexity argument? See for instance my Meaning and Mining of Legal Text.
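A hedged toy example of how the choice of metric decides which system counts as ‘performing well’ – the data and the two systems below are entirely invented:

```python
# Imagine 100 benefit claims, 5 of which are actually fraudulent.
actual = [1] * 5 + [0] * 95           # 1 = fraud, 0 = legitimate

# System A flags nobody; System B flags the 5 fraud cases plus 10 innocents.
flags_a = [0] * 100
flags_b = [1] * 5 + [1] * 10 + [0] * 85

def accuracy(actual, flagged):
    """Fraction of claims classified correctly."""
    return sum(a == f for a, f in zip(actual, flagged)) / len(actual)

def recall(actual, flagged):
    """Fraction of actual fraud cases that were flagged."""
    hits = sum(a and f for a, f in zip(actual, flagged))
    return hits / sum(actual)

# Judged by accuracy alone, the do-nothing system 'wins' (0.95 vs 0.90),
# yet it catches no fraud at all (recall 0.0 vs 1.0).
```

Whoever picks the metric thereby decides which behaviour counts as good performance – and, downstream, who gets flagged; neither number is ‘objective’ on its own.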

Making the backend of all these systems contestable implies that we must participate in designing and engineering the architecture of our onlife worlds. The time when we could leave this to lawyers or politicians is long gone; they have lost their monopoly on architecting the constraints of our lifeworld to data scientists.

Actually, the point is agency. Mindless, distributed, polymorphous nonhuman agency – sometimes ingenious or simply effective, sometimes incongruous, unresponsive and frustrating.

Agency refers to much more than analytics or algorithms; it refers to the ability of entities to learn from our behaviour in order to intervene in our world. We – humans – need to put our minds to interacting with the agents ‘we’ made. Instead of using them or being used by them (if you are not at the table, you are on the menu), we must learn to consider the artificial agents that ‘people’ our world as worthy of our respect. At the same time, however, we should remember that they have nothing to lose; they cannot feel or suffer – only simulate.

If we dare to face the paradox of interesting new agents that nevertheless do not share consciousness, let alone self-consciousness, we may be in for better times. But this will only work if we learn how to anticipate their anticipations. See, by way of example, the DataBait tool that enables us to profile the profilers, without reverse engineering or silly complexifications. Law-wise, see my inaugural lecture on the Rule of Law in Cyberspace.

Kant’s categorical imperative implores us never to use another human being only as an instrument. This suggests that we can – a contrario – use things as mere instruments. I propose that, instead of merely using them – or them merely using us – we must learn to interact with the new mindless agents that saturate our onlife world. This is what I mean by a new animism, a concept I develop in Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology, with a nod to Isabelle Stengers’ ‘reclaiming animism’.

This article gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics.

This post was published to coincide with a workshop held in January 2016 by the Media Policy Project, ‘Algorithmic Power and Accountability in Black Box Platforms’. This was the second of a series of workshops organised throughout 2015 and 2016 by the Media Policy Project as part of a grant from the LSE’s Higher Education Innovation Fund (HEIF5).
