
Mark Carrigan

August 29th, 2023

Superficial engagement with generative AI masks its potential contribution as an academic interlocutor



The value of generative AI is often dismissed after it fails to produce coherent academic responses to single prompts. Mark Carrigan argues that careless use of generative AI fails to engage with the more interactive ways in which it can be used to supplement academic work.


The release of OpenAI’s ChatGPT 3.5 almost a year ago inaugurated a wave of hype characterised by the same self-interested hyperbole familiar from previous tech bubbles. Except in this case there was a range of immediate use cases suggesting this was not just a hype cycle. Early reception within higher education focused on the threat to assessment integrity, as if the global scale of essay mills had not already called this into question years ago. There has been a similar fixation on research outputs in the discussion of how academics might use generative AI systems. While it is significant that we can no longer take for granted that cultural artefacts are the expression of human intelligence, this preoccupation with how generative AI might lead human works to be replaced by machine-generated ones has drawn attention away from a more pressing issue: how generative AI might integrate with existing processes within higher education, in positive or negative ways.

The observation that generative AI operates in a fundamentally probabilistic manner, like ‘autocomplete on steroids’, lends itself to dismissing the practical implications of these technologies. I fell into this camp until I began to incorporate ChatGPT 4 into my work in an experimental way, rapidly finding a capacity to enhance what I was doing that I found genuinely shocking. As someone philosophically hostile to posthumanism and politically critical of platform capitalism, I was invested in explaining away these developments. At the same time I was fascinated by the speed with which they were being rolled out. I have come to see ChatGPT 4 (and more recently Claude AI) as quasi-intelligent interlocutors that could make a significant contribution to scholarship. By ‘quasi’ I mean to stress that I have no belief these are, or ever could be, the fabled artificial general intelligence (AGI); but they are, as Chris Dede describes, an ‘alien, semi-intelligence’ which does something analogous to thinking. There are profound limits on what these systems can do, an unreliability to how they do it and a range of risks involved in how we use them. But this does not make what they can do any less impressive.

The point is that you need to take the time to learn what it can do, as well as how to build working routines with it. This means taking a blog post rather than a tweet as your mental model for engaging with generative AI systems, as well as approaching it as a conversation rather than a one-shot instruction whereby you simply tell the system what to do. The notion of ‘prompt engineering’ has already become overinflated, suggesting an arcane science that will be to employment in the 2020s what data science was to the 2010s. There is nonetheless clearly a skill to doing this, albeit one which academics can easily learn through trial and error. Generative AI is not a tool that can be picked up and immediately used in an effective way, not least because careless use expands the inherent risk of hallucination. I would suggest academics should not use these tools unless they are willing to commit to using them in a reflexive and accountable way.


A case in point: I have noticed a tendency for critical scholars to share examples of how their prompts elicited an underwhelming or superficial reaction from ChatGPT. The uniform feature of these examples was that little thought had gone into the prompt: they were extremely brief, failed to define the context of the request, provided no sense of the result they were expecting and certainly did not provide examples. The lacklustre quality of the ensuing response is not the devastating critique that these scholars seemingly imagine it to be. These are systems which rely on specificity and reward complexity in generating results (garbage in, garbage out, as computer scientists are fond of saying). For example, I frequently share blog posts and journal articles with Claude AI to provide background context for the questions I am asking. It has a remarkable capacity to synthesise in plausible and coherent ways if appropriately guided, doing so at a speed no human can match. I share Inger Mewburn’s belief that “the best way to use ChattieG (ChatGPT) is to imagine it as a talented, but easily misled, intern/research assistant, who has a sad tendency to be sexist, racist and other kinds of ‘isms’”. Much as some academics throw files and papers at their postdoctoral researchers expecting them to work it out, so too do they throw lazily articulated requests at generative AI systems before getting frustrated when the results do not meet their (unspecified) expectations. This failure to engage in a mindful and reflective way leaves them ill-equipped to take responsibility for the destructive qualities Mewburn points to, which can be mitigated through careful engagement and review.
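To make this concrete, here is a minimal sketch of the difference between the lazy one-shot request and a context-rich one. It uses the OpenAI Python client purely as an illustration; the model name, file name and wrapper function are my own assumptions rather than anything prescribed here:

```python
# Illustrative sketch only: assumes the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY set in the environment. Model and file names are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single user message and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; substitute whichever system you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The brief, context-free prompt behind the underwhelming screenshots:
weak = ask("Critique Bourdieu.")

# A request that defines the context, states the expected result and
# supplies source material to ground the response:
article_text = open("draft_article.txt").read()  # hypothetical draft you are working on
strong = ask(
    "You are acting as a critical reader for a sociology journal. "
    "Below is a draft article. Identify the three weakest points in its use "
    "of Bourdieu's concept of field, quote the relevant passages, and "
    "suggest how each could be strengthened.\n\n" + article_text
)
```

The same principle holds in a chat interface: the second request works because it specifies the role, the task, the form of the output and the material to work from.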

My concern is that careless use of generative AI could rapidly spread within higher education if these systems become normalised. If you approach their use in an instrumental and instructional way, as a means to outsource discrete tasks you would rather dispense with, the quality of your work will suffer. It will enable you to do a mediocre version of what you would have done anyway, much more quickly than would otherwise have been possible. In contrast, if you approach their use in a reflexive and dialogical way, as an interlocutor with which to develop your ideas, the quality of your work will be enhanced by the richness of generative AI’s contributions. In such a dialogue it can review, synthesise, reframe and critique with remarkable acuity once you have developed working routines which support this. It will always be more enjoyable and (usually) more productive to have these conversations with a human interlocutor. But this does not diminish the contribution which these systems can make to the process of scholarship.

The content generated on this blog is for information purposes only. This article gives the views and opinions of the author and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns about posting a comment below.

Image Credit: Google Deepmind via Unsplash.



About the author

Mark Carrigan

Dr Mark Carrigan is a Lecturer in Education at the University of Manchester, where he is programme director for the MA Digital Technologies, Communication and Education (DTCE) and co-lead of the DTCE Research and Scholarship group. He is the author of Social Media for Academics, published by Sage and now in its second edition. He is currently writing Generative AI for Academics, which will be released next year.

Posted In: Academic communication | AI Data and Society
