
Helena Vieira

September 4th, 2017

The regulatory future of algorithms


Automated decision systems are becoming increasingly common. Algorithms are used by banks, employers, schools, the police and social care agencies. They can help institutions run more efficiently, but if badly designed they have the potential to bring about significant harm, such as discrimination or social marginalisation. There are currently expectations that the EU’s data protection reform (the General Data Protection Regulation, or GDPR) could minimise those negative effects. However, many experts and even politicians are now calling for further action, such as new laws dedicated specifically to algorithms.

Heiko Maas, the German Minister of Justice, is one of the politicians calling for laws on algorithms. In his opinion, a new law is needed to prevent so-called automated discrimination and to force companies to be more transparent about their use of algorithms. In addition, the minister has suggested setting up a special government agency to investigate the functions and consequences of algorithms.

Automated discrimination

Minister Maas’s concerns are not unfounded. There is a growing number of reports that automated decisions can bring about negative consequences for individuals. A classic example is the system used by US courts to predict the likelihood that a defendant will commit another crime in the future. The arrested person must answer a number of very detailed questions about their life, family, education and friends. Each answer is assigned a specific score, and a specially prepared mathematical model estimates the person’s likelihood of reoffending. This assessment is submitted to the court and may affect the decision on detention pending trial and, in some states, even the severity of the sentence. An investigation by the ProPublica portal found that the model rests on many misconceptions and reproduces racist stereotypes: journalists showed that black defendants were scored as higher risk than white defendants, even when they had committed the same offences and had no previous convictions.
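To make the mechanics concrete, the sketch below shows in Python how such a questionnaire can be collapsed into a single score and a risk band. It is only a hypothetical illustration: the question names, weights and thresholds are invented for this post and do not reproduce the actual model used by any court.

    # Hypothetical sketch of a questionnaire-based risk score.
    # Question names, weights and thresholds are invented for illustration;
    # they do not reproduce any real court system.

    QUESTION_WEIGHTS = {
        "prior_arrests": 2.0,              # each answer is scored individually
        "age_at_first_offence": -0.5,
        "friends_with_convictions": 1.5,   # proxy-style questions like this can
        "unstable_housing": 1.0,           # encode social and racial bias
    }

    def risk_score(answers):
        """Sum the weighted answers into a single risk estimate."""
        return sum(QUESTION_WEIGHTS[q] * answers.get(q, 0) for q in QUESTION_WEIGHTS)

    def risk_band(score):
        """Translate the numeric score into the label the court would see."""
        if score >= 8:
            return "high"
        if score >= 4:
            return "medium"
        return "low"

    defendant = {"prior_arrests": 1, "age_at_first_offence": 2,
                 "friends_with_convictions": 2, "unstable_housing": 1}
    print(risk_band(risk_score(defendant)))  # -> "medium"

The point of the sketch is that the final label looks objective, yet every step (which questions are asked, how they are weighted, where the thresholds sit) reflects choices made by the model’s designers.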

Algorithmic bias is not a new problem. Accusations of discrimination were directed at London’s St George’s Hospital Medical School back in the 1980s. To streamline the recruitment process, the school decided to implement an automated candidate assessment system, which was supposed to be faster and more objective. Unfortunately, as it later turned out, the system discriminated against women and against people with non-European-sounding names. The creator of the model underlying the new admissions process had decided to use data on how admissions staff had made decisions in the past. The problem was that these old decisions were riddled with prejudice: for example, regardless of their academic qualifications, women overall were less likely to be considered.
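The core design flaw at St George’s, training a model to predict past human decisions rather than actual suitability, can be illustrated in a few lines of Python. The records and attributes below are invented; the point is only that whatever prejudice shaped the historical decisions becomes the model’s target.

    # Minimal sketch: the model is trained to mimic past panel decisions,
    # so any prejudice in those decisions becomes the prediction target.
    # The records and attributes are invented for illustration.
    from collections import Counter

    # Historical records: (non_european_name, female, past_panel_decision)
    history = [
        (0, 0, "admit"), (0, 0, "admit"), (0, 1, "reject"),
        (1, 0, "reject"), (1, 1, "reject"), (0, 1, "admit"),
    ]

    def predict(non_european_name, female):
        """Return what the historical panel most often did for applicants
        with the same attributes, faithfully reproducing its past behaviour."""
        matches = [d for n, f, d in history
                   if n == non_european_name and f == female]
        return Counter(matches).most_common(1)[0][0] if matches else "unknown"

    print(predict(1, 1))  # -> "reject": the historic bias is replicated

Nothing in the code mentions merit or qualifications; the system simply learns to repeat what was done before, which is exactly why ‘objective’ automation can entrench old discrimination.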

Opacity and threats to rule of law

The negative consequences of using algorithms are not limited to discrimination. Problems around transparency were more than evident in the Polish case of profiling the unemployed. Here, an automated system was designed to decide what kind of assistance an unemployed person could receive from the job office. The criteria for these decisions were unclear, and the possibility of appeal was almost non-existent. Eventually, it took a court case filed by a civil society organisation to obtain insight into the scoring rules underlying the profiling system.

From this point of view, automated decisions may bring fundamental changes to the rule of law. Citizens, as well as democratically elected authorities, find it very difficult to understand computer systems. It is often hard to assess what consequences such systems might have and whether they pose risks to fundamental rights. The American mathematician and blogger Cathy O’Neil simply calls automated systems ‘weapons of math destruction’. By describing various algorithms – in employment, banking and academia – O’Neil shows that algorithms govern many areas of today’s world but, instead of bringing more objectivity, are often simply dangerous and unjust.

New solutions needed?

How can we counter, and possibly prevent, these negative effects of using mathematical models for automated decision-making? Here, the GDPR can help. In general, the GDPR prohibits automated decision-making if it has legal consequences for the person whose data are concerned, or if it affects that person in other significant ways.

But there are some broad exceptions:

  1. Algorithmic decision-making is allowed when it is justified by a specific law (e.g. member state or EU law).
  2. It is also allowed after receiving an individual’s explicit consent.
  3. Moreover, it is allowed when it is necessary for entering into or performing a contract.

Given these broad exceptions, safeguards will still be needed to protect consumers’ and citizens’ interests. These include the right to obtain human intervention, the right to express one’s point of view and the right to contest an automated decision. However, experts from the Oxford Internet Institute have argued that these provisions are not sufficient and will not result in effective protection against harmful algorithms. What they propose is a separate right to receive an explanation of how the algorithms work and how a specific decision was made.

Another solution is to create a special regulatory or supervisory agency with powers to audit algorithms. One of the advocates of this idea is Ben Shneiderman, a professor at the University of Maryland. He compares such a future agency to the National Transportation Safety Board, the US agency responsible for investigating civil transportation accidents. In his opinion, such an institution should have the power to license algorithms, monitor their use and conduct investigations when there is suspicion that an algorithm may have negative consequences. Additionally, Danielle Citron and Frank Pasquale argue that publicly used algorithms should be evaluated for their discrimination risks: automated systems would only be allowed to operate once evaluated by independent auditors or an appropriate institution.
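What might such an audit actually compute? One simple and widely discussed check is to compare the rates of favourable outcomes across demographic groups. The Python sketch below is a hypothetical illustration of that idea: the data, the group labels and the 0.8 threshold (borrowed, as an example only, from the US ‘four-fifths rule’ in employment law) are not a prescription from Shneiderman, Citron or Pasquale.

    # Hypothetical audit check: compare favourable-outcome rates across groups.
    # The data and the 0.8 threshold are examples only.

    def favourable_rate(decisions):
        """Share of decisions in a group that were favourable."""
        return decisions.count("favourable") / len(decisions)

    def disparate_impact_ratio(group_a, group_b):
        """Ratio of favourable-outcome rates; values well below 1.0
        flag the system for closer investigation."""
        return favourable_rate(group_a) / favourable_rate(group_b)

    audited_output = {
        "group_a": ["favourable", "unfavourable", "unfavourable", "unfavourable"],
        "group_b": ["favourable", "favourable", "favourable", "unfavourable"],
    }

    ratio = disparate_impact_ratio(audited_output["group_a"],
                                   audited_output["group_b"])
    print(f"disparate impact ratio: {ratio:.2f}")  # -> 0.33
    if ratio < 0.8:
        print("flag for independent investigation")

An auditing body would of course need access to the system’s real outputs and far more sophisticated tests, but even a check this simple is impossible without the transparency obligations discussed above.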

Of course, these ideas raise many difficulties and questions, starting with the very definition of an algorithm. Nevertheless, it is precisely these proposals that politicians in Europe and in the United States are likely to consider in the near future.

♣♣♣

Notes:

  • This blog post appeared originally on the LSE Media Policy Project blog.
  • The post gives the views of its author, not the position of LSE Business Review or the London School of Economics.
  • Featured image credit: 1040, by x6e38, under a CC-BY-2.0 licence
  • Before commenting, please read our Comment Policy.

Jedrzej Niklas is a research officer at the Department of Media and Communications, working on the Justice, Equity, and Technology (JET) project. JET examines the impact of automated computer systems on anti-discrimination advocacy and service organisations in Europe and is funded by a grant from the Open Society Foundations. Jedrek also contributes to the Our Data Bodies Project (ODB), a multi-sited study of the impact of data-driven technologies on marginalised communities in the United States; ODB is funded by the Digital Trust Foundation and supported by New America. Jedrzej’s research focuses on the relationship between socio-economic rights, new technologies and public institutions. Before joining the LSE, he worked as a legal expert at Panoptykon Foundation, where he was involved in policy and research activities related to surveillance, data protection and e-government.

