Recent efforts by the European Commission to tackle online disinformation include incentivising technology platforms and online advertisers to adopt codes of conduct. Nicole Darabian, MSc Media and Communication Governance student (2018/19) at the LSE, reflects on the self-regulatory measures set out in the Code of Practice on Online Disinformation.
Disinformation – often described as “fake news” – and fears surrounding its use as a tool of manipulation and propaganda in elections have ushered in intense debate about how to effectively contain its spread on online platforms. While various states are launching inquiries at home, the European Commission (EC) carried out its own public consultation from November 2017 to February 2018 to assess the impact of disinformation on citizens and identify potential areas for regulatory intervention. The interest is not incidental – with the European elections approaching (May 2019), the Commission is determined to establish safeguards to ensure that a “transparent, fair and trustworthy online campaign” is not jeopardised by yet another wave of disinformation and rogue political advertising.
In September 2018, in response to the EC’s Communication ‘Tackling online disinformation: a European Approach’, social media companies and online advertisers proposed a self-regulatory instrument, the Code of Practice on Online Disinformation, which urges signatories to take a more active role in combating misleading content circulating on their platforms. Broadly speaking, signatories pledge greater transparency in the display of political advertising, stronger measures to identify and eliminate fake news and bot accounts, and closer collaboration with the research community to monitor online disinformation. Facebook, Twitter and Google are among the big players that have backed the code.
While notable, these efforts do not revolutionise the governance environment these platforms operate in. The code establishes a set of voluntary standards that guide signatories in tackling deceptive content, but it does not by any means constitute a legally binding agreement or establish a compliance process. During the recent CPDP conference in Brussels, Anni Hellman, Deputy Head of the Media Convergence and Social Media Unit at the EC, defended the measure, reiterating that the European institution’s approach to combating online disinformation is first and foremost a collaborative effort, not a policing exercise aimed at tech platforms.
Unsurprisingly, there are doubts about the code’s potential to effectively crack down on online disinformation. One common concern is the code’s impact on freedom of speech, a human right and a bedrock of democratic values and European principles. The code does not address illegal content; rather, it defines disinformation as “verifiably false or misleading information” that is “created, presented and disseminated for economic gain or to deceive the public” and “may cause public harm”. Adequately protecting freedom of speech is always a balancing act when introducing new regulation, and in this context critics point out that the semantics leave room for interpretation – what counts as “harmful” or “false”, and from whose point of view? Tech companies may feel compelled to delete or block access to content merely on the suspicion that it could be false or misleading. If so, the result may be excessive content regulation that restricts freedom of speech. In an era of growing populist sentiment within EU borders, it is not far-fetched that lawful but unflattering views of a government could be removed under the banner of fighting online disinformation.
Another criticism, raised by researcher Aleksandra Kuczerawy of KU Leuven, who was also present at the conference, concerns the lack of remedies for the misidentification of disinformation, such as the ability to appeal a decision. Tech firms’ increasing reliance on automated systems (i.e. algorithms) to monitor and unilaterally suppress content is concerning, given how new and fallible these verification processes remain. While human reviewers follow up on flagged content, it is unrealistic to expect individuals to properly review every report. An appeal mechanism is therefore seen as an additional safeguard against excessive or misguided judgements by private entities about what counts as disinformation.

In support of the new self-regulatory measure, Marisa Jimenez Martín, Facebook’s Director of Public Policy and Deputy Head of EU Affairs, emphasised during the conference the platform’s shared interest in tackling disinformation, highlighting steps taken to put the code into practice. These include efforts that go beyond targeting “bad behaviour” by sanctioning “bad actors” (i.e. false accounts), as well as the roll-out of its Third-Party Fact-Checking programme, which includes partners such as the global news agency AFP.
Last but not least, an underlying concern that remains the elephant in the room is the ad-driven business model most platforms depend on, which rewards clickbait. In practice, this means online advertising platforms tend to finance the dissemination of “fake news”. The absence of corporate signatories such as brands and advertisers, who can also play an important role in the fight against online disinformation, has therefore not gone unnoticed. While multiple advertising trade associations have recently announced support for the code, their commitment remains limited to raising awareness amongst their members and promoting the code’s uptake.
In sum, it is too early to assess the results of this new code of practice. However, with close to 100 elections expected around the world this year, 2019 may well put to the test online platforms’ capacity to honour their commitments in tackling online disinformation. So far, this EC-led industry effort towards more transparency is a step in the right direction, but as the EC’s Anni Hellman cautioned, “we hope that it [the code] will work” – the battle is far from won.
This article gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.