Drawing on a recent survey of academic perceptions and uses of generative AI, Richard Watermeyer, Donna Lanclos and Lawrie Phipps suggest that the potential efficiencies promised by these tools disclose which work is and isn’t valued in academia.
Widespread use of generative AI tools in academia seems inevitable. This is one of the conclusions drawn from our recent study of how academics in the UK are using Large Language Models (LLMs). Through a survey of 284 UK academics, we found that even those whose use of generative AI is sparse and/or tentative anticipate the transformation of their professional lives through more efficient and economical ways of working.
At a time when academic success is determined by the speed and subsequent volume of productive output, generative AI seems to offer a way to do things more quickly. Its enthusiasts describe it as providing ‘faster ways to process data’ and as something that ‘saves scientists time and money’. One respondent in our survey reported using LLMs to more than triple the number of research funding applications they had submitted within one year. Others spoke of how LLMs could be used to outsource up to 80% of their work. However, even if generative AI is increasing academics’ work-rate, it’s not clear that workload and work quality will see a commensurate improvement.
While a common refrain in our study was that generative AI alleviates the burden and drudgery of heavy administrative loads, we found scant evidence of generative AI being used to decelerate and to exploit the ‘gift of time’ for, among other things, higher order thinking and intellectual growth. Instead, our respondents talked almost exclusively about how LLMs had enhanced their productivity and capacity to meet performance demands.
It may thus be that the intercession of generative AI results not in any major disruption to the nature of academic work, but rather in the further magnification of its product. This leads us to ask whether AI-enabled academics might be moving too quickly through the gears and losing sight of the significance of tasks that are easily automated. Important tasks may be overlooked where generative AI is used to fast-track processes. Is the hype around generative AI consequently muddying what academics should be doing and obscuring the rewards of such undertakings?
We have seen how generative AI ‘helps’ write manuscripts, grant applications, presentations, e-mails, code, coursework and exam questions. Yet presently, the parameters of ‘help’ are ill-defined, a problem compounded by the immaturity of regulatory intervention and the lack of consensus concerning the ethics of AI use. Consequently, just as generative AI may excessively help students in the realm of assessment, so too may it excessively ‘help’ academics.
While the affordances of generative AI for academic work may appear obvious, recognising when generative AI should be used and when it shouldn’t is less so. The danger is that enthusiasm for generative AI as an efficiency aid obfuscates the tasks that academics should remain focused on. The ‘triumph’ of generative AI in terms of productivity gain may accordingly culminate (if it hasn’t already) in its indiscriminate use.
By streamlining and removing stages of ‘process’ in academic labour, where these are delegated to machine intelligence, the iterative dimension of knowledge work loses out to ‘algorithmic conformity’, which in turn constrains serendipity, happenstance and the unpredictability of scientific discovery. Research rarely pans out as it is initially conceived or intended; the greatest findings are often decided by chance. Moreover, in accelerating already hyper-stimulated work regimes in universities, generative AI may exacerbate an epidemic of burnout among academics and cause them to leave their roles in even greater numbers than are already reported. It may furthermore become a snare to those whose employment is most precarious and contingent upon the promptness and regularity of their output. Where generative AI further buries exhortations for slow scholarship, disincentivises the hard and long slog of intellectual inquiry, and displaces human-generated content, the contribution of the academic becomes increasingly degraded and susceptible to detraction.
Yet these questions of technology-aided or technology-diminished contributions are useful in elucidating the extent to which academia is already automated, already dominated by a culture of profligate repetition and reproduction, ‘copy and paste’ and short-cuts to productive accumulation. Perhaps the emergence of generative AI should then be grasped as an opportunity to reflect on and respond to what is already redundant in academia? Generative AI might then be assessed as, and developed into, a form of help which does not deskill but reskills academics and reenergises their intellectual mission. It might also be used to enjoin a reappraisal of what really matters in academia beyond a false economy of productive efficiency, and to cultivate a new kind of adaptability among academics amidst winds of alleged change.
If not, academics’ dependency on generative AI as a means of coping with productivity demands will likely only escalate, as will exploitation by those who unashamedly wield precarity to extract maximum value.
This post draws on the authors’ article, Generative AI and the Automating of Academia, published in Postdigital Science and Education.
Excellent, and about time.
About time that academics focus on actual research and actual teaching rather than on writing grant proposals and developing teaching material.
Hopefully, a better way of awarding grants can be found, and the joy of teaching can be rediscovered.
I use it all day every day in my programming course. I could use it to save time as it can complete coursework virtually by itself.
However, that would mean I don’t learn. Instead I use it as a 24-hour personal tutor.
If I don’t understand a coding concept, I ask it all the same questions as I would a human tutor.
Funnily enough, few people use it on my course, possibly due to concerns about plagiarism.
Based on the questions I hear those people ask, and based on assessment scores, they don’t understand the material as well.
So tech students are either not using it, using it inefficiently or to borderline cheat, or using it as a tutor. The latter actually takes up time, but it also prepares one for the workplace, where coders take every shortcut they can get.
Industry workers are doing the same job, only much faster, which seems to have finally burst the ridiculous contract-heavy tech market the UK has been trapped in for years.
Those contractors milking businesses by overplaying the amount of experience they have in, say, Workday are no longer commanding £750 per day.
The life-long contractors are now all feeling the pinch of having a jumpy CV, whilst complaining they can’t earn 3-4x their permanent equivalents anymore.
Gone too are their tax-dodging practices, so it’s going to be tough for them, but I won’t shed a tear.
I just hope UK educational institutions embrace the technology instead of falling for a get-rich-quick anti-GPT scheme as they did over the pond.