
Marta Soprana

February 14th, 2024

Is Artificial Intelligence Racist? The Ethics of AI and the Future of Humanity – review


Estimated reading time: 5 minutes


Arshin Adib-Moghaddam’s Is Artificial Intelligence Racist? The Ethics of AI and the Future of Humanity examines the roots of racism in AI algorithms, tracing them to Enlightenment ideologies. Marta Soprana finds the book a densely packed and thought-provoking caution on the dystopian consequences of our current trajectory of techno-racism, which we may still have time to avert.

Is Artificial Intelligence Racist? The Ethics of AI and the Future of Humanity. Arshin Adib-Moghaddam. Bloomsbury Academic. 2023.


Since the launch of ChatGPT by OpenAI in November 2022, the debate surrounding the impact of artificial intelligence (AI) on our everyday lives has intensified. Although many recognise that AI has the potential to generate significant economic and social opportunities, its use raises substantial ethical concerns. Beyond the misuse of personal data, the risks associated with the discriminatory outcomes of algorithmic decision-making, frequently reported in the media, are particularly worrying.

In his book Is Artificial Intelligence Racist? The Ethics of AI and the Future of Humanity, Arshin Adib-Moghaddam – Professor of Global Thought and Comparative Philosophies at SOAS University of London and Fellow of Hughes Hall, University of Cambridge – addresses this issue by discussing how and why racism permeates algorithms and what a misogynistic and discriminatory AI means for the future of humanity. His position is clear: current social manifestations of AI are rooted in Enlightenment racism, and if nothing is done about it, the future of society and human security will be under threat. If techno-racism and its underlying “anti-human perfectionism” go unchallenged, he forewarns, we might face a dystopian future in which Artificial General Intelligence systems will see humans as inferior and unworthy, threatening human beings’ very existence.

The book’s stated ambition is to “contribute to the supervision of AI systems in accordance with shared ethical standards to ensure our individual human security”. It flags dangerous dilemmas created by AI that humanity has never faced before and contextualises them within a historical analysis.

Adib-Moghaddam organises his analysis around five themes, one for each chapter, before concluding with a proposed manifesto for the future of AI. Chapter One (“Beyond Human Robots”) sets the stage for the core argument, as it explains how the widespread racism and bias that permeate today’s algorithms and society find their roots in the Enlightenment, which formalised and legalised a hierarchical system of discrimination between people based on race and gender. Positing that supervising machines and preventing algorithmic biases from destroying equal opportunity is first and foremost a philosophical challenge, the author argues that for AI to develop with human security, justice, and equality in mind we must reappraise the problematic legacies of the Enlightenment and work towards reforming its hierarchical and imperialistic system.

Further elaborating on this point, in Chapter Two (“The Matrix Decoded”) he warns against the dangers of techno-utopianism, arguing that the various narratives surrounding the development of AI systems are imbued with ideas of positivism, causalism, and parsimony that help to explain how and why technology facilitates various forms of misogyny and discrimination. In particular, he contends that the controversial social and cultural legacies of the Enlightenment will continue to pollute both the thinking of software developers and the datasets feeding into AI systems, “as long as modern racism is accepted as part of our social reality”.

Across the remaining three themes, Adib-Moghaddam reiterates his warning about the dangers that the unsupervised development and use of AI pose for humankind, describing the profound impact that racist and discriminatory AI technologies can have on society, human rights, international security, and the world order in Chapter Three (“Capital Punishment”), Chapter Four (“Techno-Imperialism”) and Chapter Five (“Death Techniques”). For example, he uses the killing of Iranian General Qasem Soleimani by a US air strike to warn of the dangers associated with automated, remote weapon systems and AI technologies, such as lack of accountability, bypassing of international law, and the “democratization of death”.

The author concludes his book with a manifesto for the future. Advocating for a “GoodThink” approach, he calls for the decolonisation of AI and for infusing algorithms “with a language of poetic empathy, love, hope and care” in order for this technology to be a constructive rather than destructive force. While Adib-Moghaddam maintains that we are fully equipped to embrace the challenges posed by AI at this “pivotal juncture of our existence as homo sapiens”, he warns that we need to act now in a manner that integrates national and industry-led efforts to promote ethical and trustworthy AI with international UN-led initiatives.

With its philosophical approach to understanding how the past influences AI development and how actions in the present can help change the future of humanity, Is Artificial Intelligence Racist? offers a new and interesting perspective on one of the key questions that permeate today’s debate on the ethics and regulation of AI. In this thought-provoking book, the author strikes a good balance between his harsh assessment of the perils of uncontrolled techno-utopianism rooted in the problematic legacy of the Enlightenment and a somewhat encouraging view that the battle for humanity is not lost if we are able to seize the moment and work together to develop ethical AI systems based on equality and inclusivity.

While its relatively short length and catchy title may appeal to a large audience, Is Artificial Intelligence Racist? is not for everyone. In its fewer than 150 pages, Adib-Moghaddam packs so much food for thought that readers who are less well versed in philosophical studies and more accustomed to straightforward, linear argumentation may find the book somewhat difficult to grasp. They may require multiple reads to fully understand the intricacies of the philosophical schools and theories underpinning the analysis and to digest the book’s core arguments. Still, if one is up to the challenge and wants a book that will make them think, Is Artificial Intelligence Racist? will not disappoint.


This post gives the views of the author, and not the position of the LSE Review of Books blog, or of the London School of Economics and Political Science. The LSE RB blog may receive a small commission if you choose to make a purchase through the above Amazon affiliate link. This is entirely independent of the coverage of the book on LSE Review of Books.

Image Credit: Gorodenkoff on Shutterstock.



About the author

Marta Soprana

Dr Marta Soprana is a Fellow in International Political Economy at LSE. She has extensive experience working with international organisations – including FAO, ITC, UNCTAD, UNESCAP, World Bank and WTO – and national governments on trade policy-related projects. Her research interests include digital trade, trade in services, law and technology, with a focus on the relationship between AI governance and international economic law.

Posted In: Book Reviews | Contributions from LSE Staff and Students | LSE Comment | Philosophy and Religion | Science and Tech


This work by LSE Review of Books is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.0 UK: England & Wales.