
Grace Yuehan Wang

September 28th, 2023

Effective AI governance requires global awareness of local problems


How to regulate artificial intelligence systems is a significant challenge for governments around the globe. Here, LSE Visiting Fellow Grace Yuehan Wang explains why any attempts at global governance must be attuned to local challenges.

Alongside the turbulence in Hollywood, where screenwriters and actors have been striking (in part) to demand protections around production studios’ use of artificial intelligence (AI), governments across the globe are busy establishing regulations for AI. In October 2022, the US White House issued a Blueprint for an AI Bill of Rights which, among other things, highlights the importance of data privacy and protection against algorithmic discrimination. In June 2023, the European Parliament adopted a negotiating position on the AI Act, which aims to facilitate AI investment and innovation and to ensure that AI systems fulfil EU legal requirements and uphold citizens’ rights. Talks will now proceed with EU Council members on the final form of the Act, with the aim of reaching agreement by the end of 2023. In July 2023, China issued its AI regulation《生成式人工智能服务管理暂行办法》(“Interim Measures for the Administration of Generative Artificial Intelligence Services”), which officially came into force on 15 August 2023. The Chinese legislation focuses on regulating services that use generative AI to create text, pictures, audio, video and other content available to the public within the People’s Republic of China.

The AI regulatory frameworks of the major powers and the EU indicate how governments want to utilize AI to achieve benefits and avoid risks. The question is whether any of these approaches is likely to become a global AI governance standard. If not, is our digital future likely to become increasingly divided and fragmented? This might happen because many countries in the Global South lack access to AI technology and struggle to participate in its governance due to a lack of resources. An effective global AI governance framework needs to focus on raising global awareness of the challenges presented by advances in AI systems and offer guidance in tackling local problems deriving from socio-economic and cultural differences as well as distinctive developmental challenges.

Artificial Intelligence – The Unknown Fear

The varying definitions of AI can be traced to efforts to develop intelligent neural networks with the aim of replicating human intelligence. In today’s context, the European Commission defines AI as “systems that display intelligent behavior by analyzing their environment and taking actions – with some degree of autonomy – to achieve specific goals.” The Organization for Economic Cooperation and Development (OECD) describes AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.” According to Algorithm Watch, the core elements of AI are “algorithmically controlled automated decision-making or decision support … in which decisions are initially – partially or completely – delegated to another person or corporate entity”.

AI developments fueled by machine learning, large language models and data analytics show us that AI’s capabilities are expanding to make decisions potentially without human guidance. From a positive perspective, some of AI’s applications benefit society by potentially increasing efficiency and quality in the delivery of goods and services, and by enhancing safety when AI is adopted in safety-critical operations such as those in the healthcare and transport sectors. For instance, the robotic logistics company Dorabot opened a branch in Atlanta, Georgia, in partnership with DHL Express, the world’s leading provider of international shipping services, to support DHL Express’s digital implementation with the aim of increasing productivity and service quality using AI applications. However, the need to regulate AI derives from uncertainty and the risks presented by evolving AI technology that some claim is designed to replicate human intelligence. As psychologist Nicholas Carleton suggests, fear of unknown developments can be a fundamental anxiety.

Key stakeholders in the AI ecosystem include policymakers, academics, industry, civil society and the international community. Governments around the world act both as policymakers and as users of AI. In some countries, governments are supporting national educational systems and research institutes which provide researchers, innovators and entrepreneurs with the materials and data to develop powerful AI applications. The deployment of some of these applications may be prevented in some political contexts. For instance, the human-centric EU AI Act is expected to ban applications and systems that are judged to create an unacceptable risk, such as government-run social scoring of the kind being developed in China.

Global Governance and Regulatory Divides

In view of the competition for global AI leadership and technological supremacy between major powers such as China and the United States, a global AI governance framework is needed that regulates AI use in a way that both addresses practical problems and risks in local contexts and embraces a universal objective of preventing threats to humanity.

The challenges posed by AI vary between the Global South and North, and China is the only major AI player that is viewed as a “Global South” country. Countries in the Global South face their own AI governance challenges depending on their distinctive social and cultural profiles. For example, countries in East Asia and Southeast Asia generally do not consider AI system users’ skin colour in the design of facial recognition systems, since most of the population in these countries has relatively similar skin colour. At an AI conference I attended in Cape Town in 2022, for example, a Black African government official shared that he had failed to pass through an automated facial entry system in an East Asian country because the system was unable to recognise his darker skin. This experience is being replicated in other countries.

Regulating AI globally presents many new challenges, especially if a global AI governance framework is to be applied to tackle practical problems in local situations. Some approaches, such as Ryan Calo’s taxonomy of AI challenges in developed countries, have been suggested. This taxonomy comprises five dimensions: 1) justice and equity, 2) use of force, 3) safety and certification, 4) privacy and power, and 5) taxation and displacement of labour. Matthew Smith and Sujaya Neupane have proposed a research agenda for studying AI and human development aimed at examining potential AI risks in the Global South, including: 1) fairness, bias and accountability, 2) surveillance and loss of privacy, 3) job and tax revenue loss through automation, and 4) undermining democracy and political self-determination. Common to both of these approaches is a concern with 1) justice (equity, fairness), 2) privacy, and 3) taxation and labour (jobs and tax). Among these, there is a critical need for specific local governance responses to address equity and labour issues.

Equity, Global Awareness, and Local Problems

In the Global South, Africa is the only continent where the digital gender gap has widened since 2013, posing a serious challenge to fulfilling the 2030 United Nations Sustainable Development Goals. The challenges in Africa include achieving gender equity and cultural and linguistic diversity, and AI applications have the potential to magnify existing inequities, especially when they are multi-layered. For example, digital divides could worsen due to the lack of data representing certain groups because of the form in which data is captured, stored and processed. On the African continent, research has found that 60% of African women own a mobile phone, and only 18% have internet access (compared to 25% of men). Scholars are advocating for Black representation in AI systems in Western countries, and they are achieving some successes in pressing for changes in AI system design. Yet even if issues of representation are addressed in the West, Black African women and those in marginalized communities will still be at risk of being left behind because of their lack of affordable internet access and their absence from AI system training data.

This data blindness is also due to these women’s unemployment and lack of access to bank loans or credit. As a result, their data is not collected or is poorly represented. This inhibits the ability of countries to make informed policy decisions aimed at enhancing these women’s wellbeing. This African reality demonstrates that while there might be increasing global awareness of issues of gender and racial equity, each country and region faces distinctive problems related to its social, cultural, political and economic context when it comes to creating a human-centred AI governance framework. If global AI policymakers fail to acknowledge these differences, AI governance will leave the African continent behind in responding to the challenges of AI and contribute to a more fractured digital world.

Issues of language diversity are thornier and potentially more challenging to tackle, since many languages in Africa are “low resource languages” due to the weak availability of curated data and scarce research funding for training Natural Language Processing (NLP) systems. Iroro Orife and colleagues note that:

               African languages are of high linguistic complexity and variety, with diverse morphologies and phonologies, including lexical and grammatical tonal patterns, and many are practiced within multilingual societies with frequent code switching […]. Because of this complexity, cross-lingual generalization from success in languages like English [is] not guaranteed.

One issue is the complexity involved in training researchers and technologists to develop NLP systems in African languages. This might be overcome in the long run, but the key challenge is to create incentives for supporting the required training. The reality is that global AI players, predominantly American and Chinese technology companies, are intervening simultaneously in all parts of the creative industry cultural expression chain to generate works that maximize cultural goods consumption. If AI-generated content produced by the AI systems developed by these companies starts to dominate, this could lead to an unsurpassed global concentration of the creation, production and distribution of cultural goods and services in the United States and China.

The AI regulatory frameworks of the global major powers are indicative of their respective governance priorities and of how they want to utilize AI both to achieve benefits and to avoid risks. If there is no global AI governance standard, our digital future may become more divided, not least because many countries in the Global South lack access to AI technology and have limited resources to participate in governing AI applications. The Hollywood screenwriters and actors who face the danger of being replaced by AI live in a digital world that differs substantially from that experienced by unemployed African women who still lack access to the internet. While the global AI ecosystem is taking shape, the international community needs to ensure that discussions about AI regulation avoid “first world problem” ways of thinking as it works towards a global AI standard. This means attending to the distinctive characteristics of countries and regions in the Global South, challenging inappropriate data representations, and addressing gaps in affordable access to the internet. Without attention to the localization of AI regulation, the divides in our digital future may well be beyond our capacities to govern in ways that address the challenges and risks of AI.

With thanks to Robin Mansell and Wendy Willems for their valuable feedback.

This post represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured image: Photo by Andy Kelly on Unsplash

About the author

Grace Yuehan Wang

Dr. Grace Yuehan Wang is an academic entrepreneur. She is the founder of Network Media, a research-based media consulting organization, and an appointed visiting fellow in the Department of Media and Communications at the London School of Economics and Political Science. She was a South African National Research Foundation grant holder from 2021 to 2022, and the only Asian recipient awarded Joint Research Funding by the Cape Higher Education Consortium (CHEC) / Western Cape Government (WCG) for her research proposal on the digital economy in South Africa.

Posted In: Artificial Intelligence | Internet Governance
