
James Muldoon

October 8th, 2024

Feeding the Machine: Seven links between AI and inequalities


Behind the smooth surfaces of our tech products and the endless possibilities promised by artificial intelligence (AI) lies a much darker picture – one in which the companies behind these new technologies are implicated in a troubling set of social, economic, political and environmental inequalities, writes James Muldoon.


Big Tech has sold us the illusion that artificial intelligence is a frictionless technology that will bring wealth and prosperity to humanity. Behind this smooth exterior, however, lies the grim reality of a global workforce labouring to make this possible, often under appalling conditions.

Based on hundreds of interviews and thousands of hours of fieldwork spanning more than a decade, my new book Feeding the Machine: The Hidden Human Labour Powering AI, written with Mark Graham and Callum Cant, exposes the intricate network of organisations that maintain this exploitative system. Here, I set out seven key takeaways from the book as it speaks to live inequalities debates today.

(1) AI requires a hidden army of workers, often working in terrible conditions

Behind the smooth surfaces of our tech products lies the physical labour of millions of workers across the globe. Feeding the Machine is about the hidden human cost of the AI revolution. It’s a story of the rise of AI told from the perspective of the workers who build it. 80% of the work of AI is not done in AI labs by machine learning engineers; it’s data annotation work that is outsourced to workers in the Global South.

The stories we heard when we visited what could be described as digital sweatshops were horrendous: endless days of tedious work on insecure contracts earning little more than $1 an hour with no career prospects. When we buy other consumer products like coffee or chocolate, many of us are aware of the supply chains and manual labour that make this possible. This is not always the case for digital products. But we are directly connected with these workers dispersed across the globe and actions we take as consumers, workers and citizens can make a big difference to their lives.

(2) AI supply chains reflect older colonial patterns of power

In many ways, the book is also a story of the afterlives of the British Empire and the colonial histories that influence how AI systems are produced today. There’s a reason why this work is outsourced to workers in former colonies. They are countries that have experienced harsh histories at the hands of European colonial powers and tend to suffer from underdevelopment and a lack of job opportunities as a result. Western AI companies take advantage of the relative powerlessness of workers in these countries to extract cheap and disciplined labour to build their products.

There is more than just an echo of colonialism in AI – it’s part of its very DNA. AI is produced through an international division of digital labour whereby coordination and marketing are directed by executives in the US while precarious and low-paid work is exported to workers in the Global South. The minerals needed to produce AI infrastructure are also largely mined and processed in former colonial countries. And the outputs of generative AI privilege Western forms of knowledge and reproduce the damaging stereotypes and biases found in its training datasets.

(3) AI is an ‘extraction machine’ – it feeds off our physical and intellectual work

We often hear stories of AI as a replica or mirror of human intelligence – an attempt to reproduce the basic structure of intelligent thought in a machine. But from the perspective of workers and consumers, it is more accurate to understand AI as an ‘extraction machine’: a system that feeds off the physical and intellectual labour of human beings to produce profits for Big Tech companies.

We argue that the logic of this machine is to extract the inputs of human labour, intelligence, natural resources and capital and convert these into statistical predictions for new tech products. Understanding AI using this machinic metaphor reminds us that, as a machine, AI has its own history, politics and power structures. A machine is not objective or neutral; it’s built by specific people to perform particular tasks. When it comes to AI, the extraction machine is an expression of the interests of wealthy tech investors and is designed to further entrench their position and concentrate their power.

(4) Generative AI is theft

AI requires the manual labour of data annotators and content moderators, but it’s also based on what we call ‘the privatisation of collective intelligence’. The value of generative AI tools rests on original human creative work that has been ripped off and monetised. The books, paintings, articles and recordings used to train generative AI models go largely unacknowledged and unremunerated. No consideration has been given to the human creatives whose work is used to create knock-offs and competitors in the creative market.

In the book, we tell the story of an Irish voice actor who finds a synthetic clone of her own voice online, one that has been created without her knowledge and which poses a completely novel threat to her livelihood. Companies have been devaluing the work of artists for generations, and imitators have long produced fakes and derivatives. But AI raises the stakes, allowing this to be done with greater ease and at a larger scale than ever before. Generative AI tools enable tech companies to rob an entire creative community of its value and talent, with little to no protection from existing laws.

(5) The commercialisation of AI is leading to a new bloc of “Big AI” firms

We need to start talking not just about “Big Tech”, but about “Big AI”. The commercialisation of AI is leading to a further concentration of power in large American tech companies. If you look at the major investors in younger AI startups, they are legacy tech companies that want to be seen as leaders in the AI race. This will reduce competition in the sector and concentrate decision-making power in the hands of a tiny class of Silicon Valley elites.

Big AI firms include leading cloud computing providers such as Microsoft, Amazon and Alphabet, AI startups such as OpenAI, Anthropic and Mistral, and chipmakers such as Nvidia and TSMC. These companies tend to treat AI as a commercial product, one to be kept secret and used to make profits for investors. OpenAI was started with the goal of developing artificial general intelligence for the good of humanity, but we are increasingly seeing how farcical this is: billions in investment from Microsoft, work with the US military, models trained on copyrighted works, and an imitation of Scarlett Johansson’s voice without her consent. Only a few companies have the infrastructural power to train foundation AI models, and it is these that will benefit the most from the AI revolution.

(6) Generative AI is a catastrophe for the environment

In 2019-2020, all of the leading tech companies promised dramatic cuts in their emissions, with the goal of being carbon neutral or negative by 2030. Five years on, these pledges are looking increasingly hollow. Microsoft’s emissions have increased by 30% and Google’s by almost 50%, as the surge in AI has rapidly increased investment in data centres and, with it, greenhouse gas emissions.

This shouldn’t come as a surprise. Global data centre electricity demand is set to double by 2026. One large data centre consumes as much electricity as 80,000 US households. Cloud computing has a larger emissions profile than the entire airline industry. And it’s the same problem with water: a large data centre can consume between 10 and 20 million litres of water each day, roughly as much as an American town of 50,000 people. Initiatives that put AI in the service of reducing emissions are unlikely to offset this problem, not least because oil and gas companies will also use AI to extract more fossil fuels.

(7) Redressing the inequalities linked to Big AI requires collective political action

Change will only come about when we work together to force these tech companies to change their practices. We see time and again throughout history that powerful social groups do not give up their position unless they are directly challenged through political struggle. The number one strategy we advocate in the book is for people to come together through workers’ and civil society organisations to build collective power and put pressure on tech companies to provide better conditions and improve the lives of their workers.

There are many things we can all do to contribute to this struggle, but it’s through working together and supporting the struggles of workers in these AI supply chains that we can hope to make the biggest difference. Real social transformation is most likely to occur through a fundamental shift in the balance of power between social groups.

While there are specific policies and proposals we discuss in the book, we want to leave readers with the main point: the issue is primarily a product of the disproportionate power enjoyed by large tech companies, and workers must build their own power to oppose it. It’s only by reimagining how AI is produced in this way that we can turn it into a more emancipatory technology in the service of humanity.


This post was originally published on James Muldoon’s Does Not Compute newsletter on Substack.

Feeding the Machine, published by Canongate Books, is out now.

Sign up here for a monthly newsletter summary of LSE Inequalities blog posts

All articles posted on this blog give the views of the author(s). They do not represent the position of LSE Inequalities, nor of the London School of Economics and Political Science.

Image credits: AlinStock via Shutterstock.

About the author

James Muldoon

James Muldoon is Reader in Management at the University of Essex, a Research Associate at the University of Oxford and the Head of Digital Research at the Autonomy think tank. His research examines how modern technologies such as artificial intelligence and digital platforms can create public value and serve the common good.

Posted In: Environment | Global Inequalities | Technology
