
Richard Brown

April 1st, 2023

What ChatGPT and rapidly advancing AI could mean for working life and skills


Estimated reading time: 8 minutes


Innovation in large language models (LLMs) such as ChatGPT is advancing at a fast pace, making it hard to anticipate how they, and advanced AI more generally, will affect the skills we will need in the future. Richard Brown analyses the impact of these technological tools on the knowledge economy, and discusses the many challenges facing us, not least the need for sophisticated moral judgement.

I’d like to say that I asked ChatGPT to write me a first draft of this blog, but a) it’s a tiresome cliché, and b) the platform was overloaded when I started writing, so I couldn’t. I’m not surprised. Even over the past couple of months, talk about and use of large language models (LLMs) such as ChatGPT and Bing seem to have been growing exponentially. LLMs will, we are told, render essay-writing at universities obsolete, hugely accelerate the production of first drafts, and automate the drudge work of academic research.

I am undertaking research on the skills that we will need in the future, and it feels difficult to get a handle on how LLMs and their artificial intelligence (AI) successors will affect these, given the speed at which innovation is advancing and use cases are multiplying. But it also feels careless going on negligent not to do so. So, what might it mean to work with this rough beast, as it slouches towards our workplaces?

Robert Reich’s The Work of Nations

AI will, I think, transform what we currently call the ‘knowledge economy’. Thinking about this sent me back to Robert Reich’s The Work of Nations, and its analysis of the ‘three jobs of the future’. ‘Routine production’ jobs, he wrote, were poorly valued jobs in everything from manufacturing to book-keeping, already being moved overseas when he was writing, but also increasingly vulnerable to automation. Many of Reich’s second category, ‘in-person service’ jobs, are less vulnerable to moving overseas (although many are still low-valued by society): even if some shopping has gone online, there are still jobs – from brain surgeon to hairdresser, and from bartender to care assistant – that are defined by the need for proximity. The third category, which Reich slightly awkwardly labels ‘symbolic analysts’, comprises everyone from consultants, software engineers and investment bankers, to journalists, TV and film producers, and university professors. These are the elite tier of the global knowledge economy:

“Symbolic analysts solve, identify and broker problems by manipulating symbols. They simplify reality into abstract images that can be re-arranged, juggled, experimented with, communicated to other specialists, and then, eventually, transformed back into reality… Some of these manipulations reveal how to deploy resources or shift financial assets more efficiently, or otherwise save time and energy. Other manipulations yield new inventions – technological marvels, innovative legal arguments, new advertising ploys for convincing people that certain amusements have become life necessities.”

Reich was writing 30 years ago. Since then, the offshoring and automation of routine production has gathered pace, while the rewards accruing to symbolic analyst jobs have increased. But Reich’s description of symbolic analyst jobs underlines how the very features that protected them from routine automation (the combination of analytical skill, a reservoir of knowledge and fluency in communication) may now expose them to a generation of technology that will become increasingly adept at manipulating symbols itself, even if it cannot (yet) ‘think’ or ‘create’. From an architectural drawing to a due diligence report, to an advertising campaign, to a TV show script, to a legal argument, to a news report – there are very few symbolic analyst outputs that LLMs will not be able to prepare, at least in draft.

Revisiting Osborne and Frey

Another way of thinking about the potential impact of more advanced AI on the knowledge economy workplace is to revisit Michael Osborne and Carl Benedikt Frey’s hugely influential analysis. Writing in 2013, Osborne and Frey identified the ‘engineering bottlenecks’ that had held ‘computerisation’ back from specific tasks, and that were expected to continue doing so for the next two decades. These included complex perception and manipulation activities, creative intelligence tasks (from scriptwriting to joke-making), and social intelligence tasks (such as negotiation, persuasion, and care).

The growth of LLMs chips away at the second of these, as machines draw on vast bodies of text to generate coherent content, though their joke-making skills are still a bit iffy. LLMs are also starting to make inroads into the third, as they are deployed as companions or therapists, even if their empathy is performed rather than felt. Engineering bottlenecks still constrain automation, but some are widening much faster than Osborne and Frey predicted. Indeed, one recent assessment suggests that the use of LLM technology will have an impact on around 80 per cent of US workers, with the impact greatest for higher-qualified and higher-paid workers.

That is not to say that AI will ‘destroy jobs’. Like other technologies, AI will probably create new jobs and remodel others. For the moment, there is craft in minding these machines; you need to know how to give instructions, ask questions and evaluate answers. In this, LLMs are like the oracles of classical antiquity, whose riddling utterances contained truth but needed careful interpretation. LLMs can produce good drafts and their accuracy is improving, but they can also ‘hallucinate’ facts, and assert them with a delusional and sometimes aggressive confidence.

This task of interpretation and intermediation is not that far removed from how many professions operate today. Architects, doctors, lawyers, accountants, scriptwriters – even academics – are not pure symbolic analysts, working in an entirely abstract world. Part of their skill, maybe most of it at the top of their professions, is interpersonal – motivating and managing staff, pitching ideas and winning business, convincing clients and colleagues. For these professionals, the current crop of LLMs are best deployed as responsive and multi-talented assistants, which do not get bored, demand pay, or insist on meaningful career development.

Automating menial tasks will disrupt professional development

What does this mean for actual flesh-and-blood assistants and their career development? In many modern professions, life for new recruits is a slog of preparing legal notes, PowerPoint decks, due diligence, and audit reports. I get the sense that some of this is already ‘make-work’, designed to acclimatise a new graduate to the codes and the culture of their profession, but also to give them a chance to see and learn from interactions – in the courtroom, at the client meeting, at the pitch.

If it becomes ever easier and cheaper to commission material directly from machines, that will create a problem not only for future generations of graduates, but also for those at the top of the professions, who will not be able to rely on a stream of graduate trainees to step into their shoes. Even as automation boosts productivity, it will disrupt professional development and may, in the words of one economist, “have stark effects on the value of cognitive labour”.

Furthermore, in the longer term (and I am thinking years, not decades), inaccuracy may be less of a problem than the erosion of doubt. A lot of work has already gone into stopping newer LLMs from spouting racist opinions as some of their predecessors did; future models will likely be much clearer about the ‘right answer’ to any question and about the truth of different propositions. Much of this will be helpful, though the lack of transparency and contestability is frustrating.

Minority opinions marginalised and moral judgement at a premium

But as regulation strengthens the guardrails around AI, there is a risk that some minority opinions will be marginalised and eventually expunged. Many of these will be conspiracy theories, junk science and fake news. But they may also be the small voices of gritty corrective to the dominant narrative – the proponents of ‘lab leak theories’ of COVID-19, the dogged campaigners against over-prescription of painkillers, the investigative journalists who stick to the story in the face of denials and threats.

This has inevitably already become a new front in the ‘culture war’, with some media outlets getting angry that ChatGPT refuses to promote fossil fuel use, sing paeans of praise to Donald Trump, or say that nuclear war is worse than racist language. So far, so funny. But the more the unified version of the truth promoted by AI squeezes out alternative understandings of facts, let alone alternative interpretations of how those facts should guide our behaviour, the more we will need the ability to challenge and debate that truth, and the imaginative capacity to transcend guardrails.

So, what does this all mean for skills? A knowledge economy in which LLMs are increasingly widespread will require critical judgement, a basic understanding of how coding, algorithms and emerging AI technologies operate, the ability to work with clients and colleagues to refine and use results, and the diverse and creative intelligence to challenge them.

Perhaps above all, we will need sophisticated moral judgement. LLMs and their AI successors will be able to do many things, but there will be so many complex judgements to be made about whether and how they should. Who will be accountable for any errors? Is it for a machine to define truth? Should it do so by reference to a consensus, or its own judgements of correspondence to reality? At an existential level, how should we achieve the alignment of AI and human interests? How are the latter to be defined and articulated? What balance of individual and social goods should be struck? Where are the boundaries between humans and machines? Do the machines have rights and obligations?

Today we muddle along, reaching consensus on moral issues through a broad process of societal mediation, with horrible moral errors along the way. Tomorrow, we have the potential for a new age of turbocharged progress and moral clarity, a prospect that is at once scintillating and unsettling.


 

About the author

Richard Brown

Richard Brown is an associate fellow at the School of Advanced Study, University of London, and is undertaking research on higher education and the skills needs of our future workforce.
