Eric A. Jensen

February 9th, 2024

Should the generative AI scholar be fast or slow?

Estimated reading time: 6 minutes

Across all disciplines, norms are being established around the role of generative AI in research. Discussing a recent LSE Impact Blog post on academic responses to AI, Eric A. Jensen suggests that clear, practical guidance needs to be developed by professional research associations and institutions.


Large language models (LLMs) have already altered the landscape of academic research. In a recent blogpost for Fast Track Impact, written with colleagues, I summarised some of the practical ways we have started to use generative AI to boost our efficiency and effectiveness in tasks ranging from research communication to funding applications, survey design and data analysis. There is now a nascent body of academic literature highlighting both opportunities and critiques of using generative AI (GenAI) in contemporary research, and the decisions being taken now will shape how this technology is used in the future.


The initial public reaction to GenAI in academia has skewed negative. This is understandable. These tools come with the usual ethical baggage of Big Tech, connotations of plagiarism, highly questionable uses of intellectual property and many other legitimate concerns. In a recent blogpost for the LSE Impact Blog and research paper (Generative AI and the Automating of Academia), Richard Watermeyer, Donna Lanclos and Lawrie Phipps make a valuable contribution outlining the potential sociological implications of GenAI’s integration into academic professional life. Reporting the results of an online survey of UK academics, they express concern that GenAI tools will further embed the malignant neoliberal managerialism that has infected UK academia (and beyond). Conducted in the summer of 2023, the survey found that 51.5% said ‘Yes’ to the question, ‘Do you use generative AI tools (like ChatGPT) for work-related purposes?’. While there is no way to know if this is a representative finding given the convenience sampling used, it highlights the impact GenAI is now having on academic life.

The survey records both positive and negative views from respondents, although the authors interpret this evidence in a largely negative way, arguing that GenAI:

“further contracts academics’ engagement with critical thinking and contemplative labour […]. In deluding academics of the merits of its productive efficiencies G[en]AI may become a weapon of some considerable self-harm”.

and

“In accelerating already hyper-stimulated work regimes in universities, generative AI may exacerbate an epidemic of burn out among academics and cause them to leave their roles in even greater numbers than are already reported”.

However, this evidence could just as easily run in the opposite direction: one could equally claim that GenAI can reduce academic burnout. The full dataset for this survey has not been made publicly available, but even the selected quotations include a suggestion that GenAI tools can reduce workload pressure, for example, “It takes some of the pain out of admin”. Moreover, the finding that “83% of respondents stated that they anticipated using [GenAI] tools more in the future” suggests that UK academics find these tools useful.

These views of GenAI can be contrasted with optimistic accounts, such as those of Salah et al., who see potential for improving social science data collection and analysis. This viewpoint emphasises how GenAI can deliver superior research processes or outcomes in specific domains, such as textual analysis and developing analysis categories for qualitative data. On this view, the efficiency of GenAI tools means they can be expected to become ever more central to research. GenAI optimists tend to see the ethical, justice, intellectual property, privacy, data protection, methodological and theoretical issues raised by GenAI use as significant but manageable, and they tend not to comment on the sociological implications beyond the potential for job losses. Among GenAI ‘cautious optimists’ there is a recognition that clear ethical guidelines and institutional regulations must be established to limit the use of these tools and so mitigate risks and potential harms.


Both optimistic and pessimistic accounts of GenAI’s evolving role in academic research acknowledge that the practical benefits of these tools are driving their use. I have seen these benefits in my own professional experience, including being able to rapidly re-size and re-shape my academic research outputs for different audiences and uses (e.g., shifting from a journal article to a conference presentation). With new tools, there is a tendency to focus only on risk. For some, this is a way of avoiding the need to learn new, unfamiliar ways of doing things. Change can be uncomfortable, and claiming to resist the adoption of new tools in the name of moral purity can be an effective short-term tactic for staving off the inevitable.

Claims to moral purity based on avoiding GenAI tools are dubious at best. Anybody who has ever accepted a suggested change from an automated spelling or grammar check in Microsoft Word already has some experience with using an AI tool to improve their work. Would it really make sense to invest the extra time or resources in further iterations of proofreading so that you could avoid using an automated spelling or grammar check?

Similarly, Watermeyer et al. introduce a somewhat paternalistic concern that “enthusiasm for generative AI as an efficiency aid, obfuscates the tasks that academics should remain focused on”. This seems to ignore the fact that academics will always have finite time to distribute across varied responsibilities. Even in a utopian scenario, scarce attention and intellectual effort would still need to be allocated between competing needs. Drawing on tools to handle basic tasks efficiently should be viewed as a positive, regardless of the level of neoliberal managerialism in an academic system.


There are opportunity costs, ethical issues and risks that come with the decision to shun a new, useful tool that could improve productivity and research quality. Working slowly and inefficiently can be an indulgent luxury. While we all wish to see more realistic workload models, there is nothing inherently praiseworthy about ‘slowing down’ academic work by disconnecting it from real-world developments. Working time is limited, so every decision about where to invest effort is a trade-off between competing needs and priorities. Readily available GenAI tools such as ChatGPT can speed up tasks such as converting a research article into a draft slide deck or reformatting a reference list into the correct citation style. Is it ethically justifiable to waste time doing that same work much more slowly when that may mean giving less attention to more important matters? Such ethical issues are rarely discussed because the status quo is taken for granted. Professionals will have to weigh up the costs, risks and benefits for themselves when choosing to use or reject any platform or tool.

Waiting to engage with the practicalities of how best to use GenAI tools until all risks and concerns have been ironed out to everyone’s satisfaction is a poor solution. Let’s consider instead: What are these GenAI tools good for, and what are they not good for? Which GenAI research uses are ethically acceptable and unacceptable, and why? These are the kinds of questions that move us forward towards a balanced adoption of a useful set of tools. It is crucial for professional research associations to promptly address this issue and offer clear, public guidance. The sooner this is done, the better for the entire research community.

 


The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Google DeepMind via Unsplash.


 


About the author

Eric A. Jensen

Prof. Eric A. Jensen is a Visiting Research Scientist at the National Center for Supercomputing Applications (NCSA), University of Illinois Urbana-Champaign and an Associate Professor at the University of Warwick. Jensen’s books include Doing Real Research: A Practical Guide to Social Research and Culture and Social Change: Transforming Society through the Power of Ideas. His PhD is in sociology from the University of Cambridge.

Posted In: AI Data and Society | Higher education
