
Mark Carrigan

April 27th, 2023

Are universities too slow to cope with Generative AI?


Similar to other ed-tech trends, generative AI has prompted a lively debate and a divide between those promoting the technology’s disruptive potential and those taking more critical perspectives. Mark Carrigan argues that, as generative AI is already making inroads into professional and student practice, the higher education sector cannot avoid engagement and should find new ways of responding faster to new developments.


How will universities cope with generative AI? In asking a question like this there is a risk of taking the hype at face value: even if the metaverse and blockchain were disappointments, this really is the ‘next big thing’. There are immense economic interests at work in the promotion of generative AI, as the technology sector, struggling to cope with a changing economic climate and the failure of its pandemic dream of a ‘screen new deal’, seizes upon generative AI to maintain its powerful position in society. However, if we don’t inquire into how universities respond, there is a parallel risk that we fail to address the practical challenges concerning generative AI that universities are already beginning to grapple with.

The machine production of cultural artefacts has a long history. The procedural generation of text has been practiced since at least 1305. Narrative Science were offering systems to produce business and sports journalism over a decade ago. What has changed with ChatGPT is the immediacy and flexibility of this offering, as well as a phenomenally successful marketing campaign that led it to become the fastest growing consumer application in history. Whilst the humanist assumption that art and culture must originate from the creativity of an individual has been eroded for some time, the current moment is significant due to the widespread realisation of this erosion and the low barriers to entry for those seeking to experiment with cultural production in this way.

if we don’t inquire into how universities respond, there is a parallel risk that we fail to address the practical challenges concerning generative AI that universities are already beginning to grapple with

A frequent analogy compares ChatGPT to a calculator: the argument runs that it is only the unfamiliarity of the technology that leads us to imagine that using the system is a substitute for creativity rather than an expression of it; once we have come to terms with its affordances, we will come to see it like using a calculator to undertake arithmetic more easily, freeing ourselves up for other important tasks. The problem with this analogy is that calculators are not integral parts of global computational architectures in a multibillion-dollar arms race to dominate our socio-technical future. The practical challenges that universities face in the immediate future, such as preserving assessment integrity and deciding how to acknowledge automated contributions to publications, need to be seen in this broader ethical and political context.

To ignore this challenge risks creating chaos in assessments and letting down our students, who will be working in environments where these systems are ubiquitous. However, to normalise it is to build platform capitalism into the core operations of the contemporary university. These systems are built on computational power and data capture just as much as scientific innovation, with their further growth and development reliant on the continual expansion of the machinery of user engagement and data extraction. OpenAI have been explicit about relying on ‘collective intelligence’ to manage their roll-out and refine the system, leaving higher education in the uncomfortable position of institutionalising their business model within the university.

“The scramble for thought leadership and control of the narrative is overwhelming”, as danah boyd recently observed, with competing utopian and dystopian visions laced with a rich vein of usually unacknowledged self-interest. The university sector is no different in these respects, and there is a particular form of discursive explosion to which the post-pandemic university is prone: as the pivot towards COVID-19 publications appears to ease off, another is ratcheting up. Google Scholar already records 629 results for the exact search term “Chat GPT”, despite the software only being launched on November 30th 2022. It remains to be seen how generative AI might further accelerate this commentary and analysis. Obviously, this blogpost is part of the explosion, sincere though I feel in writing it, as no doubt do the authors of each of those 629 papers.

The warning boyd offers about the ‘scramble’ is timely because of how it leaves “little space for deeply reflexive thinking, for nuanced analysis” concerning the core problem we confront: “How do we create significant structures to understand and evaluate the transformations that unfold as they unfold and feed those back into the development cycles?” This is a problem universities currently face in their attempts to solve immediate practical problems (e.g. providing guidance for students about the use of ChatGPT in summative and formative assessment) in a joined-up way which lays the groundwork for responding to still unpredictable future developments. Part of the problem is that even a single system like ChatGPT encompasses a dizzying array of use cases for academics, students and administrators that are still in the process of being discovered. Its underlying capacities are expanding at a seemingly faster rate than universities are able to cope with, evidenced by the launch of GPT-4 (and the hugely significant ChatGPT plug-in architecture) while universities are still grappling with GPT-3.5. Furthermore, generative AI is a broader category than ChatGPT, with images, videos, code, music and voice likely to hit mainstream awareness with the same force over the coming months and years.

In what Filip Vostal and I have described as the Accelerated Academy, the pace of working life increases (albeit unevenly), but policymaking still moves too slowly to cope. In siloed and centralised universities there is a recurrent problem of distance from practice, where policies are formulated and procedures developed with too little awareness of on-the-ground realities. When the use cases of generative AI, and the problems it generates, are being discovered on a daily basis, we urgently need mechanisms to identify and filter these issues from across the university in order to respond in a way which escapes the established time horizons of the teaching and learning bureaucracy. Unless policy formulation and decision making can speed up, there is a risk that institutional responses will actually amplify the problems by communicating expectations that are incongruous with a rapidly evolving situation, for example shifting towards creative forms of assessment without accounting for the growth of text-to-image and text-to-video systems.

Unless policy formulation and decision making can speed up, there is a risk that institutional responses will actually amplify the problems

There are many reasons more agile decision making is needed, but the one foremost in my mind is the risk of an escalating burden on staff. For example, as Phil Brooker and I have explored in our work on coding skills for social scientists, individualised models of ‘digital up-skilling’ (either opt-in institutional training or private use of open resources) can be helpfully replaced by group-based working on real-world problems, with better intellectual outcomes and a lower burden on academics. If universities fail to develop structures to cope with the implications of generative AI, then academics and professional services staff will be left, as Beck and Beck-Gernsheim once put it, to “seek biographical solutions to systemic contradictions”. Exciting creative opportunities and ethical challenges are on the horizon. I wish I was more confident in the capacity of universities to realise the former and respond adequately to the latter.

The content generated on this blog is for information purposes only. This article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Sciences blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: DeepMind via Unsplash.



About the author

Mark Carrigan

Dr Mark Carrigan is a Lecturer in Education at the University of Manchester, where he is programme director for the MA Digital Technologies, Communication and Education (DTCE) and co-lead of the DTCE Research and Scholarship group. He’s the author of Social Media for Academics, published by Sage and now in its second edition. He is currently writing Generative AI for Academics, which will be released next year.

Posted In: AI Data and Society | Higher education
