Lubos Kuklis

Ben Wagner

January 7th, 2020

Disinformation, data verification and social media

Estimated reading time: 5 minutes

In order to tackle problems of online disinformation, it is widely agreed that any potential regulator would need more reliable data than is currently available from tech and social media companies about the spread of information on their platforms. Ben Wagner, Assistant Professor at Vienna University of Economics and Business, and Lubos Kuklis, chief executive of the Slovak media authority (CBR) and board member of the European Regulators Group for Audiovisual Media Services (ERGA), explain their proposal for the creation of a new single European institution which could act as an auditing intermediary to ensure that the data provided to regulators by social media companies are accurate.

What you don’t know can’t hurt you: this seems to be the current approach to disinformation taken by public regulators across the world. Nobody is able to say with any degree of certainty what is actually going on, in no small part because public regulators at present don’t have the slightest idea what disinformation actually looks like in practice. We believe there are clear reasons for this state of affairs, which stem from a lack of verifiable data available to public institutions. If an election board or a media regulator wants to know what types of digital content are being shared in its jurisdiction, it has no effective mechanism for finding out.

It is of course easy to raise privacy concerns in this context. After all, who wouldn’t be concerned about a government regulator having access to all the digital content they share? It should be clear, however, that if done properly, these fears are unfounded. Public regulators do not need access to all digital content to combat disinformation. Nor do they – as some policy proposals have suggested – need to ‘break encryption’ or mandate unencrypted communications on key platforms in order to do so. At the same time, the transparency data currently provided by online platforms do not stand up to rigorous scrutiny, whether by independent academics, media regulators or civil society.

In the same way that financial services regulators rely on ‘auditing intermediaries’ to ensure the accuracy and veracity of companies’ annual reports, so too should media regulators and election boards be able to rely on auditing intermediaries to ensure that the data they receive are accurate. In which other industry would it be considered reasonable to take a private company’s claims about key financial aspects of its business at face value, without independent verification? If we can expect this level of audited scrutiny for financial transactions, why not also for digital content?

There are two ways in which such a model of auditing intermediaries could function: public or private. Both are legitimate approaches, but for reasons of space this article develops only the public intermediary model. What could such an independent public intermediary look like?

The first and most important point is that any such public intermediary would need to be highly independent. This has been a challenge in previous iterations of public sector platform regulation, which is part of why an independent agency – preferably at a European level – would be of such high importance. For example, the German ‘Bundesamt für Justiz’ (BfJ) is entrusted with enforcing the German Network Enforcement Act (NetzDG), which is in turn one of the key current elements of platform regulation in Europe. However, the BfJ is not an independent regulator: it is directly attached to the German Ministry of Justice and has to follow the instructions of the Ministry and the politically appointed Minister of Justice. As such, a public agency like the BfJ would not be in a position to conduct this kind of verification.

One stage removed are media regulators, which are themselves independent agencies within the national context. The extent of their independence, however, varies considerably, and even those that can be considered sufficiently independent are usually not equipped with the capacities or competences to audit data. Equipping them to do so is not inconceivable, but it would require substantial restructuring of these institutions in every member state.

Finally, there is the case of data protection authorities (DPAs), which are also independent agencies. Through their experience and expertise with data protection impact assessments under the GDPR and their in-house technical skills, they would be well equipped to conduct these kinds of audits. However, they are already significantly understaffed and underfunded for their existing GDPR responsibilities, without additional tasks being placed upon them.

It is clear that the need for verified data transcends any one regulatory area – be it media regulation, data protection, telecommunications or intellectual property. If every regulator were individually given the competences and capacities to verify the data relevant to its duties, this would create redundancies that would not only be economically inefficient but could also create complications, potentially leading to mishandling of the data itself.

We thus believe that it is important to create a new single European institution that draws on auditing expertise in the private and public sectors to verify the claims made by social media providers.

Such an institution could be created within the context of the proposed European Digital Services Act (DSA). It should however be a distinct legal entity to safeguard its independence from other institutional actors working in this area. The ability to draw on expertise from the European Court of Auditors, from the European Data Protection Supervisor (EDPS), as well as from the private sector, would be key to enable the effective functioning of this institution.

This institution would be responsible for collecting verified data and making them available only to authorities endowed with the legal competence to use them, to a legally-specified extent for a legally-specified purpose. The collection and verification of the data on the one hand, and their use for regulatory purposes on the other, would therefore be distinct processes, which would further enhance the independence of the institutions involved, and the security of the data in question.

What is no longer tenable is continuing public debate and regulatory policy on the actions of large online platforms on the basis of unverified data. Only if regulators have an accurate picture of what is actually happening on large online platforms – whether regarding disinformation or numerous other public policy issues – can they make accurate determinations about what steps to take. Neither regulators nor the general public should have to rely on the benevolence of online platforms to know what is going on in their own media environments.

This article represents the views of the authors, and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured image: Photo by Markus Spiske on Unsplash

About the author

Lubos Kuklis

Lubos Kuklis is a member of the board of the European Regulators Group for Audiovisual Media Services (ERGA) and chief executive at the Council for Broadcasting and Retransmission of Slovakia.

Ben Wagner

Ben Wagner is Assistant Professor and Director of the Privacy & Sustainable Computing Lab at Vienna University of Economics and Business (WU Wien), where his research focuses on technology policy, human rights and accountable information systems. He is an Associate Faculty member at the Complexity Science Hub Vienna, a visiting researcher at the Human Centred Computing Group at the University of Oxford and a member of the Advisory Group of the European Union Agency for Network and Information Security (ENISA).

Posted In: Data Protection | Internet Governance
