
Blog Administrator

August 15th, 2017

The regulatory future of algorithms


Although they are often used to automate or streamline processes, algorithms are far from being the objective tools that many make them out to be. In this post, Jędrzej Niklas from the LSE looks at the negative effects of algorithms and at how current and future policy attempts to mitigate them.

Automated decision systems are becoming more and more common. Algorithms are used by banks, employers, schools, police and social care agencies. They might help institutions run more efficiently, but if badly designed they have the potential to bring about significant harm, such as discrimination or social marginalization. Currently, there are expectations that the EU’s data protection reform (the General Data Protection Regulation, or GDPR) could minimize those negative effects. However, many experts and even politicians now call for further action, such as introducing new laws specifically dedicated to algorithms.

Heiko Maas, the German Minister of Justice, is one of the politicians calling for laws on algorithms. In his opinion, there is a need for a new law that would prevent so-called automated discrimination and force companies to be more transparent about their use of algorithms. In addition, the minister has suggested setting up a special government agency that would investigate how algorithms function and what consequences they have.

Automated discrimination

Minister Maas’s concerns are not unfounded. There is an increasing number of reports that automated decisions can bring about negative consequences for individuals. A classic example is the system used by US courts to predict the likelihood of a convicted criminal committing another crime in the future. The arrested person must answer a number of very detailed questions about their life, family, education and friends. Each answer is assigned a specific score, and a specially prepared mathematical model estimates the person’s likelihood of reoffending. This assessment is submitted to the court and may affect decisions on detention during the course of the trial and, in some states, even the extent of the penalty. An investigation by ProPublica found that the model is based on many misconceptions and reproduces racist stereotypes. Journalists showed that black defendants were rated as higher risk than white defendants who had committed the same offenses and had no previous convictions.
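To see how such a tool works mechanically, it helps to look at a deliberately simplified sketch of an additive risk score. Everything in the example below (the questions, the weights and the cut-off) is invented for illustration and does not describe any real scoring instrument used by courts.

```python
# Hypothetical illustration of an additive risk score: each questionnaire
# answer is converted into points and the total is compared with a cut-off.
# The questions, weights and thresholds are invented for this example.

RISK_WEIGHTS = {
    "prior_arrests": 3,     # points per prior arrest
    "age_under_25": 5,      # flat penalty if the person is under 25
    "unstable_housing": 4,  # flat penalty for unstable housing
}

def risk_score(answers: dict) -> int:
    """Turn questionnaire answers into a single numeric score."""
    score = RISK_WEIGHTS["prior_arrests"] * answers.get("prior_arrests", 0)
    if answers.get("age_under_25"):
        score += RISK_WEIGHTS["age_under_25"]
    if answers.get("unstable_housing"):
        score += RISK_WEIGHTS["unstable_housing"]
    return score

def risk_band(score: int) -> str:
    """Map the raw score onto the bands a judge would see."""
    return "high" if score >= 8 else "medium" if score >= 4 else "low"

print(risk_band(risk_score({"prior_arrests": 1, "age_under_25": True})))  # high
```

Notice that nothing in such a model needs to mention race explicitly: bias can enter through which questions are asked and how heavily they are weighted, which is why it often only becomes visible when the system’s outputs are examined across groups.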

Algorithmic bias isn’t a new problem. Accusations of discrimination were directed at London’s St George’s Hospital Medical School back in the 1980s. In an attempt to manage the recruitment process, the school decided to implement an automated candidate assessment system. The new solution was supposed to be faster and more objective. Unfortunately, as it later turned out, the system discriminated against women and people with non-European-sounding names. The creator of the model underlying the new admissions process had decided to use data on how admissions staff had made decisions in the past. The problem was that those past decisions were fraught with prejudice: for example, independent of their academic qualifications, women overall were less likely to be considered.

Opacity and threats to rule of law

The negative consequences of using algorithms are not limited to discrimination. Problems around transparency were more than evident in the Polish case of profiling the unemployed. Here, an automated system was designed to decide what kind of assistance an unemployed person could receive from the job office. The criteria for these decisions were unclear, and the possibility of appeal was almost nonexistent. Eventually, it took a court case filed by a civil society organization to obtain insight into the scoring rules underlying the profiling system.

From this point of view, automated decisions may bring fundamental changes to the principle of the rule of law. Citizens, as well as democratically elected authorities, find it very difficult to understand computer systems. Very often it is hard to assess what consequences such systems might bring and whether they pose risks to fundamental rights. The American mathematician and blogger Cathy O’Neil simply calls automated systems ‘weapons of math destruction’. By describing various algorithms – in employment, banking and academia – O’Neil shows that algorithms govern many areas of today’s world, but instead of bringing more objectivity, they are often simply dangerous and unjust.

New solutions needed?

How can we counter, and possibly prevent, these negative effects of using mathematical models for automated decision-making? Here, the GDPR can help. Generally, the GDPR prohibits automated decision-making if it has legal consequences for the person whose data are concerned or if it affects that person in other significant ways.

But there are some broad exceptions:

  1. Algorithmic decision-making is allowed when it is justified by a specific law (e.g. member state or EU law).
  2. It is also allowed after receiving an individual’s explicit consent.
  3. Moreover, it is allowed if it is necessary for entering into or performing a contract.

In the context of these broad exceptions, some safeguards will still be needed to protect consumers’ and citizens’ interests. These include the right to obtain human intervention, the right to express one’s point of view and the right to contest the automated decision. However, experts from the Oxford Internet Institute have argued that these provisions are not sufficient and will not result in effective protection against harmful algorithms. What they propose is to introduce a separate right to receive an explanation of how the algorithms work and how a specific decision was made.
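What such an explanation might look like in practice is easiest to see for a simple linear scoring model, where each input’s contribution to the decision can be listed directly. The sketch below is a minimal, hypothetical illustration; the feature names, weights and threshold are assumptions, not a description of any real system or of what the GDPR requires.

```python
# Minimal sketch of a per-factor explanation for a linear decision model.
# Feature names, weights and the decision threshold are hypothetical.

WEIGHTS = {"income": 0.6, "existing_debt": -0.9, "years_employed": 0.4}
THRESHOLD = 1.0  # scores above this lead to approval

def decide_and_explain(applicant: dict) -> dict:
    """Return the decision together with each factor's contribution."""
    contributions = {name: round(WEIGHTS[name] * applicant[name], 2) for name in WEIGHTS}
    score = round(sum(contributions.values()), 2)
    return {
        "decision": "approve" if score > THRESHOLD else "reject",
        "score": score,
        # Sorted so the person can see which factors weighed most against them.
        "explanation": sorted(contributions.items(), key=lambda kv: kv[1]),
    }

print(decide_and_explain({"income": 3.0, "existing_debt": 2.0, "years_employed": 1.0}))
# {'decision': 'reject', 'score': 0.4,
#  'explanation': [('existing_debt', -1.8), ('years_employed', 0.4), ('income', 1.8)]}
```

Even this toy example shows why the proposal is contested: a sorted list of weighted factors is easy to produce for a linear model, but far harder to generate meaningfully for the more complex models many organisations actually use.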

Another solution is to create a special regulatory or supervisory agency with the power to audit algorithms. One advocate of this idea is Ben Shneiderman, professor at the University of Maryland. He compares such a future agency to the National Transportation Safety Board, the US agency responsible for investigating civil transportation accidents. In his opinion, such an institution should have the power to license algorithms, monitor their use and conduct investigations when there are suspicions that an algorithm may have negative consequences. Additionally, Danielle Citron and Frank Pasquale argue that publicly used algorithms should be evaluated for their discrimination risks. Automated systems would only be allowed to operate once they had been evaluated by independent auditors or an appropriate institution.
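One concrete check such an auditor could run is a disparate impact test, for example a version of the ‘four-fifths’ rule used in US employment law: compare the rate of favorable outcomes across groups and flag the system when one group’s rate falls below 80% of another’s. The sketch below is a simplified, hypothetical version of that check, using invented decision records.

```python
# Simplified disparate impact audit: compare the rate of favorable outcomes
# between groups and flag the system if the ratio falls below the
# four-fifths (80%) threshold. The decision records below are invented.

from collections import defaultdict

def audit(decisions, threshold=0.8):
    """decisions: iterable of (group, favorable_outcome) pairs."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome  # True counts as 1, False as 0
    rates = {group: favorable[group] / totals[group] for group in totals}
    ratio = min(rates.values()) / max(rates.values())
    return {"rates": rates, "ratio": round(ratio, 2), "passes": ratio >= threshold}

# Hypothetical decision log: group A gets a favorable outcome 8 times in 10,
# group B only 5 times in 10.
log = [("A", True)] * 8 + [("A", False)] * 2 + [("B", True)] * 5 + [("B", False)] * 5
print(audit(log))
# {'rates': {'A': 0.8, 'B': 0.5}, 'ratio': 0.62, 'passes': False}
```

A real audit would of course go far beyond a single ratio, but even this simple check illustrates the kind of standardized test an oversight body could require before an automated system is allowed to operate.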

Of course, these ideas raise many difficulties and questions, such as the very definition of an algorithm. Nevertheless, it is precisely these proposals that politicians in Europe and the United States are likely to consider in the near future.

This post gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.

About the author

Blog Administrator

Posted In: Algorithmic Accountability | Data Protection | Digital Inequalities | EU Media Policy | Internet Governance | LSE Media Policy Project | Privacy
