Policymakers face a balancing act to ensure the fair distribution of the benefits of artificial intelligence. The rapid adoption of the technology can disrupt nonroutine tasks, affecting job security and exacerbating socio-economic inequalities. Vishal Rana and Bert Verhoeven discuss possible ways to address the problem, such as creating a universal basic income to help people cope with the transition.
The rapid adoption of large language models like ChatGPT and Bard undoubtedly presents many opportunities to positively transform the economy. Organisations are exploring the ability of AI to scale and lower costs, absorb and process enormous amounts of data and help make better decisions, often with humans as editors and facilitators. This transition process is likely to create new jobs that would have never existed without AI.
While the initial statistics suggest a balance between job destruction and job creation, even experts seem to underestimate the exponential pace of developments. This evolution doesn’t come without its challenges. Leaving aside talk about existential threats (for now), we focus on more immediate socio-economic challenges. Unlike earlier advances in technology, which primarily affected “routine” tasks, AI harbours the potential to automate knowledge workers’ “nonroutine” tasks like policymaking, reviewing, reflecting, coding and legal work.
These developments have exacerbated social tensions, including issues of discrimination and concerns about the functioning of democratic governments. Vast sectors of the workforce are exposed to disruption. Author Yuval Noah Harari illustrates the potential new market power: “Will the rest of the world just become algorithmic data colonies for AI-dominating countries?” Policymakers face a balancing act. They want to cultivate the flourishing of AI while concurrently safeguarding workers and consumers from potential harm. Otherwise, they will need to face the repercussions of a permanent AI divide between a) people embracing and augmenting AI and b) those who are merely replaced, potentially facing long-term joblessness and poverty. Yet, combined with the large data sets to which governments have access, AI has given policymakers new tools to tackle uncertainty and regulate social and economic priorities. As we delve into the dynamics of this pivotal concern, it’s crucial to approach the issue with empathy and a clear understanding of its potential implications for our society.
The capability of AI to automate tasks has stirred major worries about job displacement. AI and automation technologies can outperform humans in a broad array of tasks, from mundane manufacturing procedures to cognitively demanding tasks like data analysis. The real challenge involves the upskilling and reskilling of the workforce to adapt to new emerging roles. Absent adequate resources, individuals from less advantaged socio-economic backgrounds could miss these opportunities, exacerbating the AI divide.
Economic impact and inequality
The AI divide bears extensive economic repercussions, often amplifying pre-existing disparities, with the potential to intensify income inequality. High-skill labourers able to harness AI technologies could experience productivity and wage increases. In stark contrast, displaced low-skill workers may suffer income loss, aggravating wealth imbalances. Geographic discrepancies could also worsen. Regions that act as strongholds of high-tech industries may thrive, whereas those relying on industries prone to automation may face economic downturns. This geographical digital divide may result in divergent economic outcomes, with profound societal implications.
Fundamentally, the AI digital divide is an offshoot of the broader digital divide – the chasm separating those with access to information and communication technology (ICT) and those without. Access to AI technologies necessitates basic digital infrastructure, including internet connectivity and digital literacy. However, many global regions, particularly developing countries and rural areas, lack this essential infrastructure, which limits their capacity to exploit AI advancements. Even with technology access, digital literacy remains crucial. The competence to utilise and understand AI technologies is as vital as physical access, emphasising the need for digital education and training, particularly for marginalised communities.
Concentration of power
AI influences global power dynamics. “AI superpowers” command significant sway over global AI standards and norms, driven by a small number of mammoth tech corporations. This power concentration could exacerbate global inequalities. An additional risk of the AI black box is that it may lead firms, intentionally or not, to violate existing laws on bias, fraud, or antitrust, exposing them to legal and financial risk and inflicting economic harm on workers and consumers. The recent introduction of plugins for ChatGPT, a form of generative AI, illustrates this concentration of power. While people in the United States had widespread access to beta features like OpenAI’s “code interpreter” many months in advance, others had to wait their turn. Similarly, Claude 2, a competing large language model, was only available in the US and UK markets at the time of writing. Then there are organisations like Khan Academy, which had access to the capabilities of ChatGPT well before its public release and smartly incorporated them into its AI personal tutor, “Khanmigo”.
A meaningful life
Lastly, for many, employment isn’t merely income generation. It also imparts purpose and identity. AI-induced displacement could trigger a loss of meaning, potentially causing large-scale psychological and societal impacts.
While AI has made notable strides in mimicking the efficiency of human intelligence in performing specific tasks, significant limitations remain. AI programs typically offer “specialised” intelligence, meaning they solve only one problem or execute one task at a time. In contrast, humans possess “generalised intelligence”, with problem-solving abilities, abstract thinking, and critical judgment that remain vital in the business sector. Human judgment remains pertinent, if not in every task, then certainly at every level across all sectors. AI is unlikely to render redundant the workers who augment the technology, at least not in the foreseeable future. But those who cannot move to new meaningful work will struggle to keep up. The central question should shift from “humans or computers” to “humans and computers” operating in complex systems that enhance industry and prosperity.
Ways to address the problem
The challenges posed by the AI digital divide call for comprehensive solutions emphasising equity, inclusion, and proactive strategies. Governments, businesses, and civil society must confront these issues collectively. Key measures to address these challenges and bridge the divide include:
**Proposals to address the socio-economic challenges of AI**

| Proposal | Key measures |
| --- | --- |
| Economic redistribution and UBI | Incentives to reduce costs and increase profits and shareholder value can direct companies away from a socially optimal split between automation and the use of technology to help workers. We need to discuss progressive taxation and wealth redistribution to alleviate the intensification of income inequalities and prevent social unrest. A universal basic income (UBI) can counter potential job displacement, ensuring individuals affected by automation can still meet their basic needs and halting further expansion of the AI divide. UBI can be funded by productivity gains in developed countries. The issue is, how can developing economies provide UBI when manufacturing, their main income stream, is likely to be automated back to the West? |
| Upskilling and reskilling | Governments and businesses should invest in training and job services so that employees disrupted by AI can transition effectively to new positions where their skills and experience are most applicable. There needs to be more clarity on what the jobs of the future will be, but a good starting point is to emphasise digital literacy, cognitive skills, creativity, a growth mindset and adaptability. Vulnerable groups and those from lower socio-economic backgrounds must have equal access to these programs. |
| Publicly funded AI research and procurement | Given their goal of maximising profits, firms are most likely to conduct research and deploy systems that will directly benefit their bottom line. As a result, the development and adoption of AI are likely to diverge from what would be optimal for workers’ wages and employment. This creates the need for government-funded research into AI applications that augment workers (help them do their jobs). Public bodies can also direct the development of AI that complements workers through their procurement of AI systems. |
| Digital infrastructure and literacy | Governments should prioritise the development of digital infrastructure, particularly in underserved regions. Access to affordable and reliable internet connectivity is critical for individuals to reap the benefits of AI advancements. Comprehensive digital literacy and future-skills programs should be established to equip individuals with the skills to navigate and effectively use AI. |
By taking decisive action on these fronts, our societies can start navigating the challenges of the digital divide and pave the way for an inclusive and equitable AI-driven future. The time for contemplation has passed; now is the moment for governments, businesses, and civil society to rise to the occasion and shape a future that benefits all.
- This blog post represents the views of its authors, not the position of LSE Business Review or the London School of Economics.