Kneejerk fears over student use of generative AI abound, but when students are given space to discuss academic integrity in relation to generative AI, a more nuanced understanding emerges, finds Ugnė Litvinaitė
In the search for a pragmatic response to generative AI (GenAI), many higher education institutions are now rightly considering how their students are actually using GenAI. This makes sense from the perspective of democratic education, with its emphasis on the role of student voice in decision-making, but also in terms of designing an effective policy.
Last year, I conducted research with focus groups of LSE students from diverse backgrounds and disciplines to explore their views, perceptions, and practices regarding GenAI, assessment, and academic integrity. Each focus group was a 90-minute-long in-depth discussion among students in a confidential space. To stimulate discussions, I used social media materials that represented practices possibly amounting to academic misconduct and misuse of GenAI – at least according to the official policy. Based on my findings from this research and other recent empirical studies in this area, I want to highlight the following.
1. Students do not always see the use of GenAI in assessment as a violation of academic integrity
In the focus groups, students widely disagreed with the statement that all use of GenAI constitutes misconduct, despite official policies often saying the opposite. After all, they asked, how is using GenAI to generate ideas, understand concepts, code, or perform calculations different from relying on books, conversations, calculators, or Google? A blanket prohibition of GenAI use was seen as unrealistic, since “everyone uses it” and detection can easily be circumvented. Such bans were also seen as unjustified and artificial: if the tool is available to me in any other life or work situation, why can’t I use it in my academic work? Outright bans were seen, too, as a restraint on using new tools for learning and knowledge production.
Yet students draw a line. Some are concerned about the proportion of a submitted piece of work that AI-generated content makes up. However, when students are given space to discuss academic integrity in relation to GenAI in more depth, as in the focus groups I facilitated, a more nuanced approach to the ethical use of GenAI emerges. Students in the focus groups understood ethical use as a highly engaged human-machine collaboration in which AI inputs are strictly limited to certain steps or tasks defined by the student, as opposed to outsourcing the production of the whole assessment to an LLM.
To put it simply, students generally think that asking ChatGPT for ideas or for help with the structure and fluency of an essay is ethical, whereas submitting GenAI-produced work in response to an essay question with only minor edits and claiming it as one’s own is strictly off-limits. This seems to be confirmed by the HEPI survey of 1,250 UK undergraduates, in which only 8% reported having used AI-generated text in their assessments without editing it or after editing it with another GenAI tool, and 17% reported doing so after editing the text themselves.
Among other common uses reported by the students in the focus groups and other recent research are concept explanation, generating ideas, summarising texts (including those behind a paywall), analysing data, structuring essays, developing arguments, and debugging code.
Increasingly, these tools are being treated as collaborators and partners. This is much like the Centaur and Cyborg approaches to GenAI use in knowledge work identified in the largest experiment on the implications of GenAI use for knowledge worker performance.
2. Students look for guidance on GenAI from their teachers
“LSE should also put a focus on developing critical thought in the students. So for the students to actually think very critically about ChatGPT and what will be of use and what will not be of use. Because the truth is, if you enter some prompt into ChatGPT for it to write the whole essay, the essay won’t be good.” – student quote
Among students, there is little consensus on how trustworthy or indeed how useful GenAI is. Nor is it a level playing field: some students are more AI-savvy than others, and, according to Jisc, some fear that the introduction of charges for accessing certain AI tools will disadvantage students and deepen existing inequalities in education.
“I think our education system needs to evolve with technology. So, some of the companies that I’ve interned with in the past or right now, they’re actively using AI in their operations. So it’s important for students to learn how to make the most of AI while also being able to critically analyse or problem solve by themselves.” – student quote
That’s why students praise professors and institutions who set clear and enforceable rules and provide support. Training in prompt engineering skills is an easy win. More importantly, students want guidance on or exploration of ethical, effective, and critical uses of GenAI – as well as its implications, drawbacks, and limits – within their specialisms.
3. Adapting to GenAI includes designing curriculum and assessment ‘immune’ to it
Students want their institutions and courses to adapt to an AI-enabled world. But adapting does not mean overreliance on GenAI by either students or staff. What emerged from long discussions with students is that no one wants a Wild West situation in which university degrees lose their meaning and value.
While many students would welcome the thoughtful integration of GenAI into courses and assessments, some also advocate redesigning curricula and assessments to make them ‘immune’ to GenAI. Students want to be taught and assessed on those uniquely human capabilities that universities are well placed to nurture, such as thinking through experience and emotion, certain kinds of problem solving, and producing new knowledge and research.
To students, adapting to widespread GenAI use often also means diversifying assessment, so that performance free of GenAI assistance, especially where such performance would be valuable in life and work, is also assessed. This includes oral assessments and class participation marks, which students believe incentivise them to engage with their courses throughout the term and reduce assessment stress – disengagement and stress being two significant drivers of cheating.
“In my department, […] the assessments are unique and individual in such ways that it’s difficult to strictly rely on plagiarism to complete them. So we’ve had things such as case studies and policy proposals, we’ve done TED Talk-style presentations, we’ve created online portfolios, and all of these things, they do encourage critical thinking. They do encourage students to prove what they have learned throughout the course… So redirecting assessments to be more individual, where possible, would be a good idea to keep it ethical as well as actually having the students show their skills.” – student quote
Invigilated closed-book exams were also acknowledged as legitimate where a strong justification can be made for such artificial conditions. Paradoxically, then, what some teachers mean by resisting AI, students see as adapting to it.
4. GenAI is not as unique a challenge as we sometimes portray it: many root causes of GenAI misuse apply equally to other types of academic misconduct
There will always be room to abuse GenAI in many forms of assessment. But the root causes of GenAI misuse are not new: they apply equally to other types of academic misconduct. Lack of motivation or discipline, low confidence in one’s ability and knowledge, poor instruction, and stress and time pressure are all universal drivers of misconduct. Students receive offers to have their assessments completed for modest sums of money almost every day. It may be easier – and cheaper – to cheat in an AI-enabled world, but foreclosing access to GenAI won’t solve the problem. It is the root causes that need to be addressed.
As more students use GenAI and it becomes normalised in higher education, it becomes unofficially legitimised. Students who don’t use it feel pressured to do so to preserve a level playing field. In the words of one focus group participant:
“The truth is everyone is using it. You will lose out if you don’t cheat with everyone else. So, at that point, I think ethics are kind of out of the way.” – student quote
Developing assessment and feedback literacy among students and staff, as well as addressing the issues mentioned above, would go a long way towards ensuring a more constructive approach to the use of AI in assessment.
This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.
Main image: Mollie Sivaram on Unsplash