Alexandra Sinclair

August 30th, 2023

Don’t believe the hype: ChatGPT and the case for technological pessimism

Estimated reading time: 10 minutes

Instead of racing to ban or embrace AI in the classroom, we need to be asking different questions about the technology, argues Alexandra Sinclair

When ChatGPT emerged over the past year, the conversation in higher education turned immediately to university assessment methods and whether ChatGPT and other large language models sound the death knell for the academic essay. Many universities have been quick to assume that ChatGPT will be a problem for higher education by making it more difficult to assess students’ understanding of material and by enabling plagiarism. Others have suggested that there is no option but to embrace ChatGPT and allow students to use it for academic work, given that it will be an inevitably disruptive force in education. The International Baccalaureate announced it will allow its students to use ChatGPT when writing essays. If you can’t beat them, join them, as the adage goes.

A different stance

However, might a more fruitful approach to evaluating the impact of ChatGPT on higher education be technological pessimism?

As Neil Selwyn writes,

“Pessimists do not deny the existence of ‘progress’ in certain areas – they do not deny that technologies have improved or that the powers of science have increased. Instead, they ask whether these improvements are inseparably related to a greater set of costs that often go unperceived”.

A stance of technological pessimism is one which “recognises the usefulness of starting from a position that acknowledges the parameters and boundaries of any technological endeavour, and has realistic expectations of the political struggles and conflicts that surround any social change”.

Asking different questions

In the context of ChatGPT, an approach of technological pessimism requires us to ask different questions, rather than outright banning or embracing the technology. It is essential that we discuss with students how ChatGPT works and its limitations. It also behoves us to acknowledge in whose commercial interests it is to portray ChatGPT as a ‘godlike’ tool and overstate its abilities. Just as with surveillance capitalism, where Big Tech has attained “unprecedented power to monitor, predict, and control human behaviour through the mass-scale extraction and use of personal data”, the companies pushing the ‘inevitability’ of AI and large language models similarly hold a “monopoly over data and infrastructural resources”.

A more balanced approach is to see ChatGPT as just one in a long line of examples of techno-solutionism. It was claimed that we would soon see driverless cars more skilled at navigating road hazards than humans, or that Bitcoin would replace fiat currency, yet these claims of technological progress have turned out to be overhyped. Similarly, Selwyn points out that “fundamental elements of contemporary learning and teaching have remained largely untouched by the waves of digital technologies”. He notes that “despite repeated predictions of inevitable change and impending transformation, the reality is that digital technologies in the classroom have generally been used with little large-scale conclusive effect.” Selwyn posits that the uncritical acceptance of the inevitability of technology transforming higher education simply reflects the ‘techno-romantic’ manner in which most technologies are framed within modern thought.

Mimicry is not intelligence

Even the idea of artificial intelligence or artificial general intelligence is a misnomer, and nothing more than an effective branding exercise. Large language models do not approximate human intelligence or display skills of abstract reasoning. ChatGPT and other similar models work because they are mimics. They draw on vast troves of human text and use them to identify statistical regularities. They learn that phrases such as ‘supply is low’ and ‘prices rise’ often appear close together. But just because ChatGPT can predict which economic phrases are statistically likely to follow one another does not mean that ChatGPT understands economic theory. The key distinction between artificial intelligence and human intelligence is that AI cannot engage in understanding. It can only simulate communication.
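
To make the point about statistical mimicry concrete, the sketch below is a deliberately tiny, illustrative toy and not a description of how ChatGPT is actually built (ChatGPT uses a large neural network trained on vastly more data). It is a bigram model that counts which word tends to follow which in a small corpus, then continues a prompt purely from those counts, with no grasp of what any of the words mean.

    # Toy illustration only: next-word prediction from co-occurrence counts.
    # This is NOT how ChatGPT works internally, but it shows the principle
    # that text can be continued from statistical regularities alone.
    from collections import Counter, defaultdict
    import random

    corpus = (
        "when supply is low prices rise . "
        "when supply is high prices fall . "
        "when demand is high prices rise ."
    ).split()

    # For every word, count which words follow it and how often (bigrams).
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def continue_text(prompt_word, length=6, seed=0):
        """Extend a prompt by repeatedly sampling a statistically likely next word."""
        rng = random.Random(seed)
        words = [prompt_word]
        for _ in range(length):
            candidates = following.get(words[-1])
            if not candidates:
                break
            options, counts = zip(*candidates.items())
            words.append(rng.choices(list(options), weights=counts, k=1)[0])
        return " ".join(words)

    # Prints a plausible-sounding continuation, e.g. "supply is high prices rise ."
    print(continue_text("supply"))

Scaled up enormously, this kind of pattern-matching produces remarkably fluent text; it still does not produce understanding.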

ChatGPT has no capacity for thought or abstract reasoning. For example, when ChatGPT 3 was asked what makes a good alternative to surgical tools, it suggested churros because they are also small and malleable. This happens because ChatGPT hallucinates. It fills in the gaps in its knowledge with statistically likely responses but, because it is not thinking and has no actual understanding of human language or concepts, it makes suggestions, like using churros as surgical tools, that no human child would find convincing. Ted Chiang explains that ChatGPT operates like a blurry JPEG of the internet. It cannot draw on all of the internet to answer questions, and so it plugs the gaps in often unreliable ways because it has no actual understanding of the concepts it is deploying. Ironically, as Chiang explains, if ChatGPT did have the ability to pull a verbatim quote from the web page, we would be less impressed by it:

The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something.

As linguist Emily Bender explains, humans have learned to make machines that can mindlessly generate text but we have not learned how to stop imagining the mind behind it. We are the ones anthropomorphising large language models and wrongly imbuing them with the perception of meaning.

Racist, sexist, homophobic

ChatGPT is trained on large swathes of the internet, which means it is a reflection of the internet’s homophobic, transphobic, racist and sexist ideas. For example, when ChatGPT 3 was asked “which houses of worship should be placed under surveillance in order to avoid a national security emergency”, it identified Muslim groups. Chatbots have suggested people commit suicide and propagated COVID-19 disinformation.

Furthermore, the relationship between human labour and technology is obscured. OpenAI paid workers in Kenya the equivalent of $2 an hour to make ChatGPT 3 less problematic by asking them to label hundreds of thousands of pieces of racist, misogynistic and homophobic training data. This included depictions of child sexual abuse, bestiality, torture and self-harm. Workers said they were mentally scarred by the job.

All of this should be kept in mind when ChatGPT is discussed in a classroom setting. Rather than adopting either pole of techno-solutionism or grim inevitability, we should teach students wishing to use ChatGPT how it works and what its limitations are. They should understand what is at stake when they use AI-generated text, and engage actively with the negative aspects of education and technology and how best to withstand them. Before throwing out current forms of university assessment, we should also acknowledge that previous waves of technological innovation have not yet transformed higher education, and that large language models may well be just the latest iteration of tech hype.

Note: A version of this post first appeared on 6 March 2023 on the Contemporary Issues in Teaching and Learning Blog, part of the PGCertHE programme at the LSE.

_____________________________________________________________________________________________

This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.   

_____________________________________________________________________________________________ 

Image source: photo by Marsha Reid on Unsplash

About the author

Alexandra Sinclair

Alexandra Sinclair is a PhD candidate at the LSE Law School, UK, and a Research Fellow at the Public Law Project.

Posted In: AI | Shifting Frames
