tl;dr – AI and the acceleration of research communication

David Beer and Jennifer Chubb

February 24th, 2022

Estimated reading time: 6 minutes


AI is forecast to become increasingly central to many aspects of life and work. The same trends can also be detected in research. Drawing on a recent study of expert perceptions of AI uses in research and taking the recently launched tl;dr tool as a salient example, Jennifer Chubb and David Beer discuss AI’s emerging role and its potential to act as a narrator and bridging tool to communicate research to different audiences.


If we are not already cyborg researchers, then the promise is that we soon will be. As AI systems expand their reach across society, academic research will inevitably be drawn further into their orbit. Algorithmic systems have been shaping knowledge and learning for some time, but the key application of AI to research is developing on two related fronts, both of which mirror wider pressures within academia.

First are the attempts to use AI to speed up research processes. The wider acceleration of what Filip Vostal has called ‘academic timescapes’ is to be facilitated further by AI. The second set of developments concerns the push for research accessibility and impact; here, AI is proposed as a solution to gaps in communication. These AI systems are presented as automated narrators of research findings, able to adapt those findings to different audiences on our behalf.

Drawing on recent research into current research practices, and on an exploration of playful evaluation tools aimed at improving research accessibility and communication, we can quickly see how both speed and accessibility are becoming central promises of AI.

First, in terms of accelerating the research process, there is understandable excitement about the opportunities AI brings for analysing large amounts of data and the way this might ‘speed up’ research. In theory, this brings with it improved research ‘productivity’, reach, and impact.

These AI systems are presented as automated narrators of research findings, able to adapt those findings to different audiences on our behalf.

In our imagined knowledge economies, the ability to communicate research to different audiences is similarly a skill in ever-increasing demand, not least when ‘lay summaries’ or ‘plain English abstracts’ are expected in funding proposals. This push inevitably creates niches for which AI can be proposed as a solution.

One example of such a tool comes in the form of the AI narrative-making tl;dr papers, a novel tool that uses GPT-3 and promises to transform research descriptions so as to engage a ‘second grader’. Powered by GPT-3, OpenAI’s language generator, the tl;dr tool can create human-like text and looks at first sight to be incredibly good at doing so. GPT-3 can generate almost any kind of text and any kind of story: it can write articles, and it can even generate guitar tablature for your favourite song.
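To give a sense of the underlying mechanics, here is a minimal sketch of how a tl;dr-style summary might be requested from GPT-3 through OpenAI’s completion API of the time. The model choice, prompt wording, and parameters are illustrative assumptions on our part, not tl;dr papers’ actual implementation:

```python
import openai

openai.api_key = "sk-..."  # an OpenAI API key is assumed

# Stand-in text: any research abstract could go here.
abstract = "Paste the abstract to be summarised here."

# GPT-3's completion endpoint simply continues the prompt it is given, so a
# cue like "tl;dr" steers the model towards a plain-language summary.
response = openai.Completion.create(
    model="davinci",  # illustrative choice of GPT-3 model
    prompt=abstract + "\n\ntl;dr, in words a second grader could follow:",
    max_tokens=80,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```

Tools like tl;dr wrap this kind of call in a friendly interface; the summary is a statistical continuation of the prompt rather than a reading of the paper, a point we return to below.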

Tl;dr is not the first such ‘readability’ tool to attempt to make accessible those research and academic texts that are thought to be too complex or convoluted. The Paper Digest tool condenses an article’s arguments into key points and seeks to reduce the time it takes a reader to get to grips with research. Such tools supersede more problematic, long-standing ‘readability’ measures like the ‘Gunning Fog Index’, which scores how hard a text is to read rather than rewriting it (a formula sketched below). There are many reviews of tl;dr on Twitter, with first impressions being largely positive. To test this out, we formed some impressions of our own: in an example from one of our papers, about the Ethics of Conversational AI for Children’s Storytelling, we ran the abstract through tl;dr.
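By way of contrast, the Gunning Fog Index is not a generative system at all but a simple formula: 0.4 × (average words per sentence + the percentage of words with three or more syllables), read as the years of formal education needed to follow the text. A minimal sketch, ours rather than any canonical implementation, with vowel-group counting standing in for proper syllable detection:

```python
import re

def gunning_fog(text: str) -> float:
    """Gunning Fog index: 0.4 * (words/sentences + 100 * complex/words),
    read as the years of formal education needed to follow the text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # 'Complex' words are those with three or more syllables; counting
    # vowel groups is a rough stand-in for true syllable counting.
    complex_words = [
        w for w in words if len(re.findall(r"[aeiouy]+", w.lower())) >= 3
    ]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

print(round(gunning_fog("The cat sat on the mat. The cat was happy."), 1))
```

Where the Fog index merely scores difficulty, tools like tl;dr rewrite the text itself, which is both their appeal and, as we go on to show, their risk.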

The tl;dr output is, admittedly, an amusing take on the article, and on first reading it doesn’t do too bad a job. That is, until we got to the arms and legs… Many Twitter posts noted a tendency for the tool to focus on one aspect of an abstract, and this was the case here: important elements of the study’s context, e.g. ethics, children, and storytelling, were somewhat missing from the newly constructed narrative. It would seem that, inevitably, there remains significant potential for misreading and misunderstanding.

Key to this is that the AI isn’t understanding the research itself; it is offering an interpretation based on the terms used and how they might be rendered into a language imagined to be more accessible. What is often lost in such processes are the nuances that underpin originality in research. This does not, however, stop the AI from becoming a narrator for research, charged with finding the means and words to reach imagined audiences beyond those assumed to be able to access it.

The use of AI tools surely comes with a warning, as we have seen with the impact agenda and other forms of metricisation in higher education. Using AI as a way of ‘speeding up’ to ‘keep up’ with bureaucratic and metricised processes may proliferate negative aspects of academic culture. The pressure to be ever quicker and more productive is one that is felt widely. If AI is to alleviate, rather than exacerbate, such pressures, its expansion into research should assist and not replace human creativity.

As tools that narrate our research for us move closer to the mainstream, we may need to think about the notion of accessibility that is coded into them.

So, is AI a good solution for this kind of engagement? In a recent paper, one of the authors found that experts in the field perceived AI as helpful in supporting the research process with respect to information gathering and other narrow tasks, but, perhaps more surprisingly, also in support of impact and interdisciplinarity, open innovation, public engagement, and citizen science.

Participants regularly referred to the idea that AI could act as a bridge beyond the university context, and that boundaries could be expanded through greater participation in science. The notion of the ‘bridge’ is, we would suggest, important here: the bridge is constructed in a way that responds to existing perceptions and notions of accessibility. Taking on the role of automated narrator, the AI effectively becomes the proposed means of bridging.

As tools that narrate our research for us move closer to the mainstream, we may need to think about the notion of accessibility that is coded into them: how it leads our research to be automatically narrated, and what this might mean for the imagined audiences being reached through the possibilities of AI. We will also need to think about what it will mean if we let AI both speed up and narrate the research process, especially as this may bring with it certain values and ideals. As Kate Crawford has recently argued in her book Atlas of AI, ‘the task is to remain sensitive to the terrain and to watch the shifting and plastic meanings of the term “artificial intelligence” – like a container into which various things are placed and then removed – because that, too, is part of the story’.

 


The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: note thanun via Unsplash.


 


About the author

David Beer

David Beer is Professor of Sociology at the University of York. His most recent book is The Tensions of Algorithmic Thinking.

Jennifer Chubb

Dr Jennifer Chubb is a Research Fellow at the University of York. Jenn’s research career to date has focused on the philosophy and politics of research and the societal and ethical implications of emerging science and technology and she is an appointed advisor to the Better Images of AI project. Her current research focuses on the impact of Artificial Intelligence and related digital technologies with the AI, What’s that Sound? Project.

Posted In: AI Data and Society | Research communication
