
Serge P.J.M. Horbach

Evanthia Kalpazidou Schmidt

Rachel Fishberg

Mads P. Sørensen

November 12th, 2024

Writing assistant, workhorse, or accelerator? How academics are using GenAI



Reporting on a nationwide survey among researchers in Denmark, Serge P.J.M. Horbach, Evanthia Kalpazidou Schmidt, Rachel Fishberg, Mads P. Sørensen and colleagues find three clusters of attitudes towards GenAI use for research. They argue these variations reflect differences across disciplines and different knowledge production models.


For better or worse, Generative Artificial Intelligence (GenAI) is being integrated into academic research. Some evidence suggests its widespread use, with traces found in research papers and peer review reports.

The academic discourse on GenAI spans enthusiastic adoption to cautious evaluation and skepticism. While some embrace GenAI tools’ ability to accelerate and enhance research processes, others voice concerns about misinformation, biases, and GenAI’s overall impact on research integrity. The ethical implications of using GenAI in research remain unresolved.

A national survey

Despite its rapid growth, the specific ways researchers are using GenAI remain unclear. What tasks are they using GenAI for, and where do they draw the line on its appropriate use? To find out, we conducted a nationwide survey among researchers at Danish universities.

Between January and February 2024, we invited all researchers at Danish universities (including PhD researchers) to participate in a survey to ascertain just how GenAI is being used in research, and how its use breaks down across different disciplines and demographics. Of 29,498 invitations, we received 2,534 complete responses, with a further 533 respondents answering part of the questions.

Disciplinary differences

The results show that, across all disciplines, most researchers use GenAI tools for only a few research tasks, such as editing research proposals or helping to write code for data analysis (Fig.1). However, there are some noteworthy variations that may reflect important epistemic differences. These differences can be characterised as nomothetic research, which seeks general natural laws, versus ideographic research, which aims to explain the particulars of individual events, or as differences between positivist and interpretivist traditions and approaches.

Fig.1: Respondents’ self-reported use of GenAI for 32 different use cases (in blue) and the perceived use of GenAI for these use cases by direct colleagues (in yellow).

Respondents from the technical sciences, for example, especially those doing experimental science, used GenAI for more diverse tasks than their colleagues. Some technical science respondents even used GenAI for more than half of all use cases mentioned in our survey. They also gave the highest aggregated research integrity assessment across the 32 use cases, meaning that, on average, they are more positive towards using GenAI models and tools.

In contrast, scholars from the humanities used GenAI tools the least and gave the least positive research integrity assessment of GenAI use. Quantitative social scientists indicated that they use GenAI tools for substantially more use cases than their qualitative social science colleagues, who also tended to be more skeptical in their assessment of how good or bad different GenAI-supported research practices are.

The survey further showed no differences between men and women in either use or research integrity assessment of the use cases. Unsurprisingly, junior scholars reported using GenAI for more research tasks than their senior colleagues. In terms of research integrity, no substantial differences were observed between respondents from different academic age groups.

Amidst this variation in respondents’ views on the use and research integrity of GenAI in research processes, we identified three distinct clusters:

GenAI as workhorse

A first cluster, comprising 35% of respondents, consists of academics who feel GenAI can be used responsibly, but only for some research tasks. They approve of GenAI usage for language editing tasks, as well as data analysis and coding tasks. Conversely, they are critical of using GenAI for more creative tasks, like generating ideas, hypotheses or experiments, and for peer review tasks. In the open text fields, one claimed:

 “[GenAI] is good when used for tedious tasks like formatting, editing, generating a code […] and terrible for creative tasks.”

GenAI as language assistant only

In the second cluster (24% of respondents), we find the most sceptical researchers. Apart from straightforward language editing tasks, they disapprove of GenAI use for all purposes, although this cluster also contains the most ‘neutral’ or indecisive assessments of GenAI for research. A respondent in this cluster illustratively described GenAI as ‘a glorified spell checker’.

GenAI as research accelerator

The final cluster (41% of respondents) comprises the GenAI optimists. Respondents in this largest cluster are generally positive about using GenAI for almost all use cases, particularly in relation to data analysis and research design, lauding GenAI especially for its potential efficiency gains.

Policies that meet researchers where they are

Researchers generally held positive attitudes toward GenAI. Its currently infrequent use in the research process suggests that the barriers to wider use may stem from a lack of familiarity, insufficient training, and the current limitations of GenAI models, rather than from outright opposition. What is clear from the survey’s qualitative responses is the strong demand for institutional support. Researchers are calling for structured training, better access to tools, and clear regulations to navigate this rapidly evolving landscape. They also express a wish for guidelines that promote the equitable and responsible use of GenAI across various fields and organisations.

For policies and guidelines to be effective and avoid becoming mere paper tigers, they must be closely aligned with the practices of those they are intended to govern. These policies should not only address practical considerations but also take into account how they may influence the legitimacy and perception of GenAI within the research community. Successful integration of GenAI into research practices will depend on thoughtful, inclusive, and—most importantly—adaptable policies that can evolve as GenAI unfolds itself in different ways across communities.


This post draws on the authors’ article, Generative Artificial Intelligence (GenAI) in the research process – a survey of researchers’ practices and perceptions, published on SocArXiv.

The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Gorodenkoff on Shutterstock.



About the authors

Serge P.J.M. Horbach

Serge P.J.M. Horbach is an Assistant professor in Science and Technology Studies at the Institute for Science in Society, Radboud University. His main research interests include scholarly communication, particularly peer review practices, and science-society interactions, particularly open science and public trust in science. serge.horbach@ru.nl, https://orcid.org/0000-0003-0406-6261

Evanthia Kalpazidou Schmidt

Evanthia Kalpazidou Schmidt is an Associate professor at the Danish Centre for Studies in Research and Research Policy, Aarhus University. She specializes in higher education policy, research and innovation, gender in knowledge production and research organizations, European gender policies, and research evaluation. She has served as the EC-appointed Danish expert member of the European RTD Evaluation Network and as an expert member of the H2020 Advisory Group on Gender. eks@ps.au.dk, https://orcid.org/0000-0002-3204-0803

Rachel Fishberg

Rachel Fishberg is a Postdoctoral Fellow in Science and Technology Studies at the Danish Centre for Studies in Research and Research Policy, Aarhus University. Her main areas of interest include the politics of knowledge production on a local and global scale, governance in higher education and universities, transnational and interdisciplinary collaborations, as well as policies related to research integrity. rfish@ps.au.dk, https://orcid.org/0000-0002-4386-2933

Mads P. Sørensen

Mads P. Sørensen is a Professor at the Danish Centre for Studies in Research and Research Policy at Aarhus University. His current research focuses on Social Theory and Research on Research, including Research Integrity, Research Ethics, and Research Policy. mps@ps.au.dk, https://orcid.org/0000-0003-2455-2515

