
Richard Watermeyer

Donna Lanclos

Lawrie Phipps

January 22nd, 2024

If generative AI is saving academics time, what are they doing with it?



Drawing on a recent survey of academic perceptions and uses of generative AI, Richard Watermeyer, Donna Lanclos and Lawrie Phipps suggest that the efficiencies promised by these tools disclose which work is and isn’t valued in academia.


Widespread use of generative AI tools in academia seems inevitable. This is one of the conclusions drawn from our recent study of how academics in the UK are using Large Language Models (LLMs). Through a survey of 284 UK academics, we found that even those whose use of generative AI is sparse and/or tentative anticipate the transformation of their professional lives through more efficient and economical ways of working.

At a time when academic success is determined by the speed and subsequent volume of productive output, generative AI seems to offer a way to do things more quickly. Its enthusiasts describe it as providing ‘faster ways to process data’ and as something that ‘saves scientists time and money’. One respondent in our survey reported using LLMs to more than triple the number of research funding applications they had submitted within one year. Others spoke of how LLMs could be used to outsource up to 80% of their work. However, even if generative AI is increasing academics’ work-rate, it is not clear that workload relief and work quality will see a commensurate uptick.


While we found in our study a common refrain of generative AI alleviating the burden and drudgery of heavy administrative loads, we found scant evidence of generative AI being used to decelerate and exploit the ‘gift of time’ for, among other things, higher order thinking and intellectual growth. Instead, our respondents talked almost exclusively about how LLMs had enhanced their productivity and capacity to meet performance demands.

It may thus be that the intercession of generative AI results not in any major disruption to the nature of academic work, but rather in the further magnification of its product. This leads us to ask whether AI-enabled academics might be moving too quickly through the gears and losing sight of the significance of tasks that are easily automated. It may be that important tasks are overlooked where generative AI is used to fast-track processes. Is the hype around generative AI consequently muddying what academics should be doing and obscuring the reward(s) of such undertaking?

We have seen how generative AI ‘helps’ write manuscripts, grant applications, presentations, e-mails, code, coursework and exam questions. Yet presently, the parameters of ‘help’ are ill-defined, a problem compounded by the immaturity of regulatory intervention and the lack of consensus concerning the ethics of AI use. Consequently, just as generative AI may excessively help students in the realm of assessment, so too may it excessively ‘help’ academics.

While the affordances of generative AI to academic work may appear obvious, recognising when generative AI should be used and when it shouldn’t is less so. The danger is that enthusiasm for generative AI as an efficiency aid obfuscates the tasks that academics should remain focused on. The ‘triumph’ of generative AI in terms of productivity gain may accordingly culminate (if it hasn’t already) in its indiscriminate use.


By streamlining and removing stages of ‘process’ in academic labour, where these are delegated to machine intelligence, the iterative dimension of knowledge work loses out to ‘algorithmic conformity’, constraining serendipity, happenstance and the unpredictability of scientific discovery. Research rarely pans out as it is initially conceived or intended; the greatest findings are often decided by chance. Moreover, in accelerating already hyper-stimulated work regimes in universities, generative AI may exacerbate an epidemic of burnout among academics and cause them to leave their roles in even greater numbers than are already reported. It may furthermore become a snare to those whose employment is most precarious and contingent upon the promptness and regularity of their output. Where generative AI further buries exhortations for slow scholarship, disincentivises the hard and long slog of intellectual inquiry, and displaces human-generated content, the contribution of the academic becomes increasingly degraded and susceptible to detraction.


Yet these questions of technology-aided or technology-diminished contributions are useful to elucidate the extent to which academia is already automated, already dominated by a culture of profligate repetition and reproduction, ‘copy and paste’ and short-cuts to productive accumulation. Perhaps the emergence of generative AI should then be grasped as an occasion to reflect on and respond to what is already redundant in academia? Generative AI might then be assessed as, and developed into, a form of help which does not deskill but reskills academics and reenergises their intellectual mission. It might also be used to enjoin a reappraisal of what really matters in academia beyond a false economy of productive efficiency, and to cultivate a new kind of adaptability among academics amidst winds of alleged change.

If not, academics’ dependency on generative AI as a means of coping with productivity demands will only escalate, as will their exploitation by those who unashamedly wield precarity to leverage maximum value extraction.

 


This post draws on the authors’ article, Generative AI and the Automating of Academia, published in Postdigital Science and Education.

The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Adapted from Morgan Housel via Unsplash.



About the author

Richard Watermeyer

Richard Watermeyer is professor of higher education and codirector of the Centre for Higher Education Transformations at the University of Bristol, UK.

Donna Lanclos

Donna Lanclos is an anthropologist, and a senior research fellow in technology enhanced learning at Munster Technological University, Ireland.

Lawrie Phipps

Lawrie Phipps is the senior research lead at JISC, UK.

Posted In: AI Data and Society | Featured | Higher education
