
Blog Administrator

May 28th, 2019

AVMSD and video-sharing platforms regulation: toward a user-oriented solution?


Following more than two years of negotiations, the revised EU Audiovisual Media Services Directive was adopted in November 2018, aiming to better reflect the digital age and create a more level playing field between traditional television and newer on-demand and video-sharing services. Here, Lubos Kuklis, chief executive of the Slovak media authority (CBR) and chair of the European Regulators Group for Audiovisual Media Services (ERGA), continues an exchange with Stanford’s Joan Barata on the implications of the new regulation of video-sharing platforms.

The new Audiovisual Media Services Directive (AVMSD), in force since December 2018, covers video-sharing platforms (VSPs), making this the first time that European-level legislation has addressed content regulation on any kind of digital platform.

True, the directive only covers audiovisual content, so its impact won’t be felt directly across all social media, and it is still not clear to what extent its scope will extend to video content on platforms like Facebook, which host a combination of written posts, images and videos. But with this legislation, I would argue, we are witnessing the birth of a new approach to content regulation. So it is advisable to pay close attention not only to the details of the directive itself, but also to the process of transposing its provisions into national legislation in EU member states, through which this regulation will eventually enter the public space. And indeed, the approach in the AVMSD is not dissimilar to the one recently laid out in the UK Government’s Online Harms White Paper, the first comprehensive proposal on the table for content regulation in the digital environment. True, the latter is on a vastly different scale, but in principle the points of departure are the same, which is one more reason to take this piece of EU legislation seriously.

This post is the latest instalment in a series of exchanges between Joan Barata from Stanford University and me on the regulation of VSPs introduced by the revised AVMSD. You can read the previous posts here, here and here.

In those posts, we discussed how imminent the threat of over-regulation stemming from the directive’s provisions on VSPs really is. I distinguished between a singular regulatory push toward content takedown, as can be witnessed in the German NetzDG law, and an approach that lays down protections for users against overbearing content removal policies and introduces basic transparency to the processes in question, as potentially introduced in Article 28b of the AVMSD.

In this post, I would like to specifically address three issues. The first is the question of freedom of expression in the digital environment, and whether there actually is such a thing as freedom of expression as a human right for users on social media platforms. The second is a brief analysis of the nature of rights and protections that the Directive provides to users of the VSPs. And finally, the third will be the relationship between regulation based on VSPs’ own terms of service (ToS), and that based on state legislation – an issue that features prominently in Joan’s latest article in the series.

The non-existent rights of users

Critics of enlargement of the scope of AVMSD to VSPs claim that it could, in Joan’s words, “trigger relevant interpretative conflicts and may negatively impact users’ rights, particularly the fundamental right to freedom of expression”.

Alas, users currently have no actual rights on platforms. Everything they can do on a platform is what that platform allows them to do. The relationship is not even properly contractual, as the contract, in the form of the ToS, can basically be changed unilaterally by the platform at any time.

I don’t want to sound overly pessimistic: this is just how the relationship between platforms and users currently works. If we are talking about rights, we are basically talking about guarantees. And there are no guarantees for users on the platforms — literally none.

So, the relationship is very one-sided, and if we really want to address the problem of freedom of speech online, we should focus on this imbalance.

There have not been many serious attempts to put forward a legal solution to this problem. One exception that deserves attention comes from David Kaye, the UN Special Rapporteur on freedom of opinion and expression. In his June 2018 report, he specifically calls for a human rights-oriented approach to content regulation on social media. But he too focuses more on the actual exercise of speech than on its legal protection.

These are not the same thing, however. Legally speaking, freedom of speech is not about what you can actually say right now, but about the extent to which the legal system protects you when you say it. Benevolent autocrats may allow their subjects to speak freely, but if they can change their minds at any moment and punish whomever they like for whatever they have said in the past, then their subjects do not have freedom of speech in any meaningful sense.

And in principle, a similar situation currently exists on social media platforms. True, they can’t punish you by locking you up, but they can take down your words and thoughts or, ultimately, expel you from their digital premises altogether. It is their decision alone. And as social media platforms are not particularly known for their ability to protect their users’ interests, let alone their human rights, the only real solution is to recognise platforms as spaces subject to human rights jurisdiction. And this is something that should bind platforms and state governments equally.

The threat of overbearing regulation

The obvious danger in governments’ attempts to protect users on platforms from harmful content is that, if the rules are enforced too zealously, platforms will take down more content than they otherwise would. And I agree that this is a risk.

But what if these rules are not enforced too zealously, or, amazingly enough, even require transparency about the rules and procedures determining which content is and is not allowed on the platform? Wouldn’t that be better than what users experience now?

Because this is what the AVMSD is introducing. It is the first and, as far as I am aware, the only legislation that lays down transparency rules for decisions taken by platform providers. It includes a requirement that the measures taken by a platform be proportionate vis-à-vis the interests of its users. It also provides a legal basis for redress if users feel mistreated by the platforms.

True, all of this is only for VSPs and only in rudimentary form. But as a first step, it should be welcomed.

Similar considerations are included in the UK Government’s White Paper on Online Harms, although clearly much more thought went into harm prevention mechanisms than into those aimed at protecting users’ individual freedoms.

VSP regulation in the AVMSD

To be absolutely clear, the directive does not create a “duty of care” or any other general responsibility of VSPs vis-à-vis their users. Nor does it create any actual substantive rights worth the name. It only lists certain measures that a VSP is obliged to put in place to protect minors from harmful content and to protect the general public from hate speech and criminal content (terrorism, child pornography and racist or xenophobic content are specifically mentioned), and requires it to apply certain rules on advertising on its platform.

Therefore, even if the directive recognises users’ right to court review, it is hard to imagine what kind of rights infringement a user could invoke before a court. Of course, this may change in actual implementation, and EU member states may indeed create real, enforceable rights. But most likely, it will mainly be regulatory bodies enforcing these provisions, and, as Joan rightly points out, much will depend on the extent to which they do so.

What eludes me, however, is how this solution is worse than leaving users entirely under the power of the platforms, with no real rights, no enforceable protections and no guarantees of fair treatment. There is no shortage of cases in which takedowns by platforms were hard to understand or outright absurd, with no recourse available to the users involved. And there is no shortage of cases in which users legitimately asked for protection from abuse on a platform and received no help.

Indeed, we may have a question or two about those regulatory systems that nominally function under a democratic constitution but, in reality, may just be executing the orders of not-so-democratic governments. But it is here, again, that we need to stress the importance of backstop guarantees for users against excessive intrusions, be it from the platforms themselves or from state actors, including (nominally) independent regulators. If we’re serious about protecting freedom of speech (and other rights) online, we need to protect it from all sides.

The distinction between ToS and legal requirements

Joan claims that it will be hard to distinguish which decisions taken by platforms are legally based on their own ToS, and which are based on state-imposed regulatory grounds. But this is not the first time that statutory regulation has shaped, or indeed limited, contractual arrangements. Provisions like these are part of many other regulated environments, from telecoms to banking.

The crucial thing, however, is that the limits placed on contractual arrangements are properly balanced. On social media platforms, to put it narrowly, two kinds of relationship emerge: those between the platform and its users, and those between users themselves. Older kinds of regulation, like the German NetzDG, were concerned only with the relationship between users and the platform, and only in one direction.

So there was, and is, only a singular regulatory push toward the removal of harmful content. If these rules are stated too vaguely, and if there are serious consequences for platforms that fail to comply, this might indeed lead to more takedowns than would occur without such regulation, as platforms seek to stay on the safe side of the law. That could obviously have negative consequences for users’ freedom of expression.

But if there are clear legal limits to what the platform provider can legislate contractually – and again, this is nothing we haven’t seen before – then the regulation creates a framework within which the provider can legally operate. In areas not covered by the regulation, it can operate under the contractual framework. In areas that fall within the scope of the law, the provider is legally responsible for applying it.

This should mean that the provider is limited both by what it is obliged to do and by what it must refrain from doing. So not only will there be a duty to take down certain kinds of content; there will also be an obligation to leave legitimate content online. Within such a framework, there should be no incentive to overreact. On the contrary, companies should be motivated to strike a proper balance between the various competing rights and interests. And this is precisely what an approach focused on human rights should achieve.

Of course, in the context of social media platforms, you cannot conceivably foresee every situation that might occur, nor can you expect to cover every single case of wrongdoing every time. So the regulatory approach, as anticipated by the AVMSD, should be systemic rather than focused on each individual case. It should aim to ensure that adequate measures are in place, that their application is proportionate, and that users are transparently informed about both their existence and the underlying procedures.

The devil, as always, will be in the detail, and as always, there will be borderline cases. But such an arrangement would give the user more power, not less. And it would make the platforms more transparent and accountable. Now, we have to ensure that the overseeing authority is also transparent and accountable, as well as competent and reasonable.

It is worth noting that the regulation of VSPs is not the only novelty in the directive. For the first time, it also recognises the material and functional independence of regulators from governments. Furthermore, there is a very strong push toward co-regulatory solutions throughout the directive, particularly in the provisions aimed at VSPs. So the directive also lays the groundwork for what might, in the end, be more effective and independent regulatory arrangements.

You often hear that the regulation of social media can’t be left to governments. But you also often hear, sometimes from the same direction, that it can’t be left to social media platforms either. Broadly, I agree. To work effectively, regulation must involve both.

This article represents the views of the author, and not the position of the LSE Media Policy Project, nor of the London School of Economics and Political Science. 

