The European Parliament recently adopted the revised text of the Audiovisual Media Services Directive, which governs EU-wide coordination of national legislation on all audiovisual media. Lubos Kuklis, chief executive of the Slovak media authority (CBR) and chair of the European Regulators Group for Audiovisual Media Services (ERGA), discusses what is new about the AVMSD revision, and what it means for the future of freedom of speech.
The New Regulatory Framework
The revision of the EU’s Audiovisual Media Services Directive (AVMSD), adopted earlier this month after a legislative journey of more than two years, introduces (among other things) a completely new element to media regulation – a regulatory framework for video-sharing platforms (VSPs). Although it builds on many concepts from the older regulation of TV and, later, video-on-demand services, this set of rules creates a novel regulatory model.
And rightly so: video-sharing platforms (think YouTube, but also video content on social media such as Facebook), where users do much of the actual work, do not have the same strong editorial element as the media traditionally covered by the directive, i.e. TV and video-on-demand. The provisions regulating these older types of media are relatively direct in imposing obligations on media providers. But the sheer scale of VSPs’ operations, combined with active users who are the primary source of the creating, uploading, mixing and commenting on content, calls for a different approach.
The main difference lies not only in the framing of the provisions for this new type of media – vaguer wording that gives providers considerable leeway in how precisely to implement the required measures – but also in the division of competences between regulators and the providers themselves.
Under the new framework for the regulation of VSPs, it is the VSP provider who is responsible for creating the terms and conditions for the use of the platform, the mechanisms through which users can file complaints, and the system for rating or flagging content. The regulator, by contrast, has a more detached role than in older types of media regulation: it mainly assesses whether the mechanisms established by the provider comply with the law.
This approach gives rise to concerns that we are just outsourcing regulation to private companies. Such a delegation of power could certainly run into trouble with the fundamental rights of users of platforms in question, and therefore with basic principles of constitutional democracies.
A thoughtful recent contribution to the debate along these lines is this post by Joan Barata of Stanford University. Although I agree with many points in his text, I also think that Joan’s worries may not be entirely warranted. And because I believe we need an intense discussion on this, I wrote this post in part as an answer to Joan’s arguments.
Regulation vs freedom of speech
Indeed, delegating the exercise of regulatory powers to a private entity could be very damaging to freedom of speech and of the media. But we also need to admit that the most used platforms provide digital public spaces where a large portion of society’s conversation takes place – and in these spaces, the platforms already are regulators in their own right, with almost no transparency and therefore no accountability. As the outcomes of this arrangement are far from satisfactory, it is time to find a workable solution.
Again, leaving regulatory powers to a private entity without any public oversight is clearly not the right solution. But that is also not, in my opinion, what the new AVMSD does. It does oblige VSP providers to put protective measures in place: it requires them to include certain provisions in their terms and conditions, to create a content rating system and to establish procedures for dealing with complaints. Many of these tasks were, and still are, part of the remit of media regulators when dealing with radio, TV and video-on-demand. But in the context of online platforms, that approach is simply not feasible.
So, considering the sheer scale of the operations in question (and other factors as well: the technological infrastructure, the active role and high number of users), there is clearly no other way to police content moderation activities than to co-regulate the environment together with the platforms themselves.
The risk that Joan and others highlight is that regulatory oversight will serve only to make the rules and their application more severe, and that more takedowns of content will be interpreted as more effective regulation.
This worry is not completely misplaced, of course. The last time the European Commission nudged providers to do something about hate speech, it did indeed seem to celebrate more takedowns as an unequivocal success. But without transparency and information about individual cases, you simply cannot say whether the takedowns are really improving the media environment, or whether providers are just trying to get rid of any controversial content – or, indeed, of any content somebody happens to be complaining about.
But the oversight doesn’t need to be like this – certainly not if it does not single-mindedly push the platforms to clean up their services, but also protects the rights of their uploading users. If the overseeing regulator (and of course the users) know what users’ rights are and can see what procedures providers have put in place for dealing with users’ content, then evaluating the effectiveness of regulation takes on a wholly different meaning. Oversight may not only push the provider to get rid of bad content; it can also protect users from an overly strict or arbitrary application of the rules. It may, in other words, create an environment where users are protected not only from the harmful content of other users, but also from overbearing or arbitrary intrusions by the platform itself.
And all of this is legally provided for in the AVMSD. It establishes transparency of procedures between users and platforms. It sets basic standards for users’ complaints and establishes their right to be informed of how their complaint is being handled. It provides for an independent regulator to assess whether the mechanisms established by the provider are appropriate. It gives users the option to seek out-of-court redress if they feel they have been mistreated by the platform. And finally, every EU member state has to ensure that users can defend their rights before a court.
So I think the legal groundwork for the protection, but also the fair treatment, of users is in the directive. It now depends on the member states to implement it in such a way that this potential is fulfilled (and the European Commission has a big role to play in this process).
And this is also why I think the debate we are currently having is so important. So, thank you, Joan, for initiating it.
This article gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.