
Vincent Obia

June 13th, 2023

What can African countries do to regulate artificial intelligence?


How to effectively regulate new technologies such as general-purpose AI systems is a pressing global challenge. Vincent Obia, who recently received his PhD from Birmingham City University, explains what African countries can do to take a more active role in regulating AI.

Since the beginnings of the internet, Africa has largely been a receiver of new communication technologies and the international regulations that come with them. This was the case with social media, where countries outside the US have had to work with and around platform terms of service that are rooted in American legal structures, as Kate Klonick demonstrates. Developing countries tend to have far less say when it comes to developing, influencing, or contesting these regulations, and for Africa, this essentially translates to operating within rules that are externally set.

A similar pattern is now occurring with the rise of general-purpose artificial intelligence (AI) systems such as the large language model ChatGPT. Africa has again found itself on the receiving end of AI technologies, but this time, the continent can – and should – influence their regulation.

To successfully regulate general-purpose AI, the continent needs to canvass for a bold and inclusive approach to multistakeholder regulation, one that is a departure from the current practice of regulating social media by targeting internet users and the emerging practice of platform governance through co-regulation.

Current Practice: Regulatory Annexation Targeting Internet Users

One approach that falls short with regard to regulating general-purpose AI is the prevailing practice of regulating internet users through laws and other regulatory instruments, which I theorise in my recent article as regulatory annexation. As shown in the figure below, I define regulatory annexation as the extension of standards, principles, and sanctions originally meant for a particular frame of reference to another.

Figure: The Regulatory Annexation Model

In most African countries, an example of regulatory annexation involves using legal and technological measures to address online fake news and hate speech. For instance, I found during my research that at least 33 of the continent's 54 countries have adopted legal instruments (laws, registration requirements, and social media taxes) or technological instruments (such as targeted internet bans). African governments tend to introduce measures like these because they want to maintain the same level of control over internet content as they have over broadcasting, which they heavily regulate.

Examples can be seen in Egypt and Lesotho, where social media users who have a considerable following are registered and regulated as internet broadcasters. Laws or legal proposals in most African countries also reflect a Tanzanian statute that designates internet users as publishers.

These regulatory annexation measures usually target free expression and have not sufficiently tackled problems related to online harms. This is partly because the internet and social media cannot be subject to the regulatory paradigm that applies to traditional broadcast media. And if these measures are inadequate for social media, they will fail outright in the regulation of general-purpose AI.

Emerging Practice: Regulating Online Platforms through Co-Regulation

It appears that some African governments are beginning to realise the limitations of the regulatory annexation model, where users are subject to fines and/or imprisonment. This has prompted a shift to regulating online platforms through co-regulation, somewhat similar to the European approach codified in the Digital Services Act.

This legislative approach groups online platforms into categories and assigns them various duties of care. In Africa, Nigeria, for example, introduced a code of practice in 2022 to regulate interactive computer service platforms. Reports also indicate that Egypt is considering legislation that would require social media platforms to be registered.

What remains to be seen, however, is where African countries lie on the balance of power with digital platforms that are mostly domiciled in Western countries. In other words, what influence can African countries wield over these platforms? We know, for instance, that platforms might refuse requests from African countries, as in the Uganda and Google case. A similar pattern is likely to define the regulation of AI. The speed of AI advancement and the power dynamics at play mean that emerging African strategies and approaches to AI regulation may be toothless at best, suggesting the need for a more collegiate response.

Recommended Practice: An Inclusive and Comprehensive Multistakeholder Protocol

What is essentially required to regulate AI effectively is a global multistakeholder system in which Africa should be ready to play an active part. Leaders on the continent should learn from the mistakes made with social media regulation, where attention was focused on internet users or on uncertain gains from direct platform regulation.

I suggest in my recent article that to deal with the risks and dangers associated with general-purpose AI, countries across the globe need to collectively agree to create, domesticate, and apply a unified systemic regulatory code to any platform or AI-related entity headquartered within their jurisdiction. Leading figures in tech already recognise this and have called for a six-month pause in the training of AI systems to allow governments and other stakeholders across the world to develop policies to combat AI risks. This was followed by a statement signed by top AI experts who observed that mitigating AI risks should be a global priority.

As altruistic as this may sound, the likely expectation is that the major powers in North America, Europe, and East Asia will be the ones who lead – or rather, dominate – the creation of guidelines on safe AI, a point which computer scientist Stuart Russell alludes to. African countries may not be actively involved in designing these policies, meaning that Africa could be governed by AI systems and regulations in which it has little say. This is already the case with social media and internet governance debates, where few developing countries get to participate and fewer still are able to shape the agenda.

Hence the need for inclusivity in the development of general-purpose AI regulation, with active contributions from African countries. The African Union should be at the forefront of this campaign. As a start, the AU could strengthen its proposal for a continental strategy and working group on AI to account for the wider risks, not just problems with data flows and protection. Individual African countries such as Mauritius and Tunisia have also begun developing their respective AI strategies, as noted in the 2022 Government AI Readiness Index. But, again, these strategies focus on maximising the (economic) benefits of AI and are lightweight when it comes to regulatory guidelines.

What Africa can do, therefore, is to harmonise its position on the comprehensive regulation of AI and canvass for an international protocol that the continent plays an active role in creating. The window for such an Africa-inclusive protocol may well be closing fast.

This post represents the views of the author and not the position of the Media@LSE blog nor of the London School of Economics and Political Science.

Featured image: Photo by Emiliano Vittoriosi on Unsplash

About the author

Vincent Obia

Dr Vincent Obia completed his PhD in new media regulation at Birmingham City University and teaches at the University of Lagos. His research interests include AI and platform governance and the inequalities between the Global North/South in the use, design, and regulation of new media technologies.

Posted In: Artificial Intelligence | Internet Governance
