Ahead of the upcoming Information Influx conference in Amsterdam, organized by the Institute for Information Law, Joris van Hoboken of New York University and Natali Helberger of the University of Amsterdam pose some pressing questions about the role that algorithms play in our everyday mediated lives. They argue that liability and accountability are key concerns in relation to these ubiquitous automatic systems.

The need for a societal debate about the rising influence of algorithmic governance is growing more pressing by the day. Popular online media services increasingly rely on smart algorithms that decide, on the basis of large data sets about users and their preferences, which content internet users get to see and which they do not. Google's result pages, Facebook walls and Twitter feeds have become the de facto filters to the online public information environment.

With the help of algorithmic information processing these online platforms shape the public's view of public life and social affairs. They do so in competition with the websites of online newspapers, which also use algorithms to select and prioritize the presentation of content and references to pre-defined segments of the audience. To an increasing extent, there is also automation in the removal of content and references once published. Clearly, the proliferation of algorithmic decision-making has altered the public information environment dramatically. It can have many advantages for the media, advertisers and users alike. It also raises concerns that have become an active field of debate and inquiry in media law and policy scholarship.

Who is liable for the automatic?

One issue that continues to be the subject of debate and litigation is that of the liability of the platforms determining our digital horizon through algorithmically shaped processes. To what extent are they responsible for what their algorithmically mediated practices produce, for instance when content or references might be illegal or deemed harmful in some jurisdictions, or interfere with privacy and reputation?

Trust luck or algorithms to protect your interest?

In the most recent example of the liability issue coming to a head, the EU Court of Justice concluded in Google Spain that those affected by search engine results generated with respect to their person have a right to request the removal of search results that are incorrect, outdated or otherwise not in compliance with European data protection laws. This widely discussed judgment seems to depart from a long-standing tradition of granting safe harbors to internet intermediaries for the sake of the free flow of information and legal certainty. The debate over the judgment's interpretation and implementation is currently in full swing, in particular with regard to the question of how to ensure that these newly established obligations do not unduly limit freedom of expression online.

How do we hold algorithmic gatekeeping to account?

More generally, there is the question of what the proper frameworks are for establishing accountability and ethical standards for algorithmic gatekeeping. The remarkable concentration of data and control over powerful algorithms in the hands of a few profit-oriented companies triggers critical questions about access to culture and diversity. What might be the harmful effects of a lack of competition, and what is the impact on a plural and diverse public debate? More specifically, when looking at prevalent business models and monetization strategies, the question of how to protect editorial integrity in the light of the influence of advertising, ad networks and media analytics has become an important one.

What does it mean when algorithms are primarily directed at optimizing profit-seeking based on the opportunities these platforms provide for the organization of our attention, instead of at optimizing more socially desirable outcomes? What normative frameworks should we use to evaluate algorithmic decision-making?

Finally, the pervasive reliance on the user in these algorithmic processes brings the privacy and intellectual freedom of media users into play, as well as the dangers of discriminatory access to media content. Will it even remain possible to speak of privacy in the context of media use in a meaningful way? What can and should government-led media policy contribute to countering the segmentation of the public sphere and the inherent but not always explicit biases of advertisement and content targeting strategies? The rise of algorithmic media poses a variety of fundamental questions for information law and the regulation of the online environment, and there is no automatic answer in Google to these normative and practical questions.

Information Influx takes place from 2 to 4 July 2014 and is organized by the Institute for Information Law at the University of Amsterdam. Both authors will be speaking at a session devoted to the "algorithmic public" along with other leading experts: Niva Elkin-Koren, Wolfgang Schulz, and Bernhard Rieder.

This article gives the views of the authors and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics.