The problems with Ethiopia’s proposed hate speech and misinformation law

Blog Administrator

June 4th, 2019

In April, the Ethiopian government published a draft law that aims to tackle hate speech and disinformation in the country. Here Halefom H. Abraha, a PhD candidate and Marie Curie research fellow at the University of Malta, analyses the proposed law, which he believes is highly problematic and raises more questions than it answers. 

Regulating problematic online content has become a pressing issue around the world. The ease and speed with which harmful and dangerous content is disseminated and accessed via social media makes this challenge particularly acute. Many countries have adopted, or are considering, some form of legislation to regulate problematic online content, be it hate speech, fake news, or extremist and terrorist content. Like many other nations, Ethiopia is grappling with the serious and growing problem of hate speech and misinformation.

On 6 April, the Ethiopian government released a draft criminal law aimed at tackling the ever-increasing problem of hate speech and disinformation in the country. There is no doubt that hateful speech and disinformation have contributed significantly to the polarized political climate, ethnic violence and displacement unfolding in Ethiopia. It is also understandable that the government is under pressure to act. However, if the draft law enters the statute book in its current form, it may not achieve its stated legislative goals.

The proposed bill is poorly drafted, with profound implications for human rights in general and for freedom of expression and the right to privacy in particular. As the proposed law does not provide any substantive or procedural safeguards, the biggest concern is that, once enacted, it could be used to censor dissent and limit freedom of expression. The draft law also appears to be politically motivated, given that government officials and government-affiliated activists are, at times, accused of disseminating hateful speech that allegedly helped fuel violence in the country. This piece seeks to highlight the substantive, procedural, institutional, and practical shortcomings of the proposed law.

Substantive problems

The bill creates two vaguely defined categories of criminal offence: ‘hate speech’ and ‘fake news’. According to Article 4 of the proposed law, disseminating hate speech via any medium of communication, including a social media platform, is a punishable offence carrying a sentence of up to five years’ imprisonment. Even the ‘possession’ of hate speech content with the intent to disseminate it is a criminal act punishable with a prison sentence of up to one year (Art. 4(2)). Similarly, disseminating ‘fake news’ by any means, including via social media, is punishable with a maximum prison sentence of three years (Art. 7(4-5)). The concerns these provisions give rise to are outlined below.

First, the drafters confuse social media with conventional media, that is, broadcast and print media. Unlike social media, traditional media is subject to strict, centralized regulatory requirements, including licensing and editorial control. The actors who participate in conventional media are known, and their degree of responsibility is prescribed in domestic legislation. There is nothing similar in the world of social media. Therefore, putting social media users and conventional media broadcasters on an equal footing and then imposing similar criminal liability is an overreach. It is also important to note that conflating traditional media and social media could lead to excessive regulation of the latter.

Second, the draft law fails to clarify who would be responsible if a conventional media house is involved in disseminating fake news or hate speech. Conventional media houses involve many actors (including editors, authors, printers, and publishers), each of whom has a different degree of control over the content they publish or broadcast, and a correspondingly different degree of legal liability. The drafters could have avoided such an oversight with a cursory look at the country’s current media laws. The drafters also fail to distinguish between legal and natural persons. This is particularly important given that the law would apply to conventional media and social media alike. As it currently stands, the draft law imposes the same criminal punishment on an ordinary social media user as on a satellite TV provider.

Third, under Article 7(5) of the draft law, a ‘fake news’ offence committed by a social media user with five thousand or more followers attracts an aggravated penalty. In other words, social media users with 5,000 followers bear greater criminal responsibility than those with 4,999 followers, regardless of the consequences of the fake news they publish. This distinction appears arbitrary, since it turns on the number of followers rather than on the substance of the crime committed.

Fourth, it remains unclear whether the criminal responsibility imposed on social media users is limited to those who create the content, or whether it extends to minor acts of participation such as liking, sharing, re-tweeting, and commenting. It is also unclear whether social media account owners are responsible for ‘hate speech’ or ‘fake news’ content posted by their followers, third parties or anonymous users. What does the five-thousand-follower threshold mean in this case? Should social media account owners with that many followers preemptively monitor who says what on their accounts to avoid criminal punishment? The draft law says little on such points.

Procedural concerns

In its preamble, the draft law ostensibly recognizes international proportionality principles – that any interference with fundamental rights must be in accordance with the law, necessary to pursue a legitimate aim in a democratic society, and proportionate. Unfortunately, the drafters do not appear to have considered the procedural and due process issues that would translate these principles into practice. As indicated at the outset, the proposed law does not provide any substantive or procedural safeguards. For instance, it is not clear who is responsible for monitoring and removing illegal online content – be it fake news or hate speech – or under what conditions. It is also unclear whether judicial authorization, one of the tenets of the proportionality principle, is required to monitor and remove illegal online content.

The principle of due process also requires, among other things, that any interference with human rights be in accordance with procedures properly enumerated in law, consistently practiced, and available to the general public. Unfortunately, the proposed law does not spell out what rights and remedies social media users would have in the event of abuse. For instance, it is not clear whether internet users would have any recourse to a court or administrative body in the case of illegitimate, disproportionate or unjustified actions.

Institutional challenges

If the draft law is promulgated, its implementation would depend, by and large, on institutions with the capability, expertise and a clear and comprehensive mandate, among other things. This is particularly important because the current regulatory framework for online content in Ethiopia is fragmented. However, the institutional arrangement that this draft law envisages could compound the existing regulatory problem. As it currently stands, the draft law tasks the Ethiopian Broadcasting Agency (EBA) with overseeing social media service providers and raising awareness to curb fake news (Art. 8(3-4)). The Ethiopian Human Rights Commission, on the other hand, would be responsible for raising awareness to curb hate speech (Art. 8(5)). Even though it is commendable that the proposed law incorporates awareness-raising as a practical measure for countering hate speech and misinformation, this institutional arrangement is problematic.

One concern is that the Broadcasting Agency and the Human Rights Commission are not obvious institutions for the regulation of social media. Another is that the way responsibility is allocated between them would create regulatory overlap, duplication of effort and a waste of resources. Furthermore, the drafters completely ignored otherwise relevant institutions such as the Information Network Security Agency, which is tasked with addressing most of the matters covered under the proposed legislation, both under its establishment law and under the Computer Crime law. Finally, it is striking that the drafters also overlooked the institution most relevant to regulating the internet: the telecoms regulatory authority that is soon to be created. The proposed law would therefore allow the existing regulatory uncertainties, gaps and overlaps to persist.

Practical problems

On top of the institutional problems highlighted above, the implementation of the proposed law could face practical challenges. Under Article 8, the draft law requires ‘social media service providers’ to preemptively monitor and prevent the dissemination of fake news and hate speech, and to take ‘expeditious’ action to remove such content upon receiving a ‘convincing’ notice of complaint. Who is authorized to give notice, and what would constitute ‘expeditious’ action or a ‘convincing’ notice, are left unanswered. The draft also requires social media service providers to adopt an appropriate policy to help them fulfill this responsibility. The question, however, remains: how will this provision be enforced? The responsibilities imposed on social media service providers are neither legally enforceable nor practically feasible, for at least the following reasons:

  • First, the draft law does not impose any punishment on those ‘social media service providers’ who fail to comply with this law.
  • Secondly, there are no ‘social media service providers’ based in Ethiopia; the country is a consumer of foreign-based social media services.
  • Finally, leaving aside problematic content produced abroad, often through fake or anonymous accounts, Ethiopia simply does not have the leverage – be it economic, political or legal – to properly regulate even online content produced within the country and disseminated via foreign-based social media platforms such as Facebook, Twitter, and YouTube.

Closing remarks

There is no doubt that hate speech and fake news are becoming serious problems in Ethiopia, and social media certainly plays a significant role. However, blaming social media for the current predicament and trying to solve it through a five-page law obscures the larger problem. Anyone with even a passing knowledge of what is happening in the country can easily observe that hate speech and fake news in Ethiopia have become a matter of politics and power, more than a predicament that can be addressed through legislation.

Having said that, while the problem could have been better addressed through other policy options, this does not obviate the need for a carefully formulated and comprehensive law, with clear legal mandates and procedures governing the authorities who regulate online content. The proposed law certainly does not fall within that category, as it raises more questions than it answers. As highlighted above, it exhibits substantive, procedural, due process, institutional, and practical shortcomings. Most importantly, the draft law fails to serve the purpose envisaged in its own preamble: that ‘any interference with fundamental rights must be in accordance with the law, necessary to pursue a legitimate aim in a democratic society, and proportionate’. Furthermore, the draft law, as it currently stands, contradicts existing laws such as the Computer Crime Proclamation (Art. 13-14) and the Broadcasting Service Proclamation (Art. 30(4)). Taken together, it is safe to conclude that the proposed law exhibits a number of shortcomings and needs serious reconsideration.

This article represents the views of the author, and not the position of the LSE Media Policy Project, nor of the London School of Economics and Political Science. 
