
Utkarsh Leo

July 21st, 2023

Generative AI should mark the end of a failed war on student academic misconduct

Estimated reading time: 6 minutes

Higher education institutions have long been locked in a Sisyphean struggle with students prepared to cheat on tests and assignments. Considering how generative AI reveals the limitations of current assessment regimes, Utkarsh Leo argues educators and employers should consider how an obsessive focus on grades has distorted teaching and learning in universities.


Despite long-running efforts to end it, rising student academic misconduct remains a threat to UK higher education. Why is this the case, and can anything be done to fix it?

Misconduct in higher education (HE) is not new. Advances in technology – from the launch of the world wide web and the emergence of Google to wearable cheating gadgets like Monorean – have made misconduct convenient. The opportunity to cheat has only been magnified by the emergence of text-generative AI models. This is evidenced by a recent survey of 1,000 undergraduate students, which found that 43% of respondents acknowledged using ChatGPT or similar AI applications; 30% admitted using AI for the majority of an assignment; and 17% admitted to having submitted AI-generated coursework without edits.

Fig.1: College students' use of AI tools. Survey question: "To what extent do you use AI tools like ChatGPT to complete your assignments, exams, or related schoolwork?" (N=216). Source: Best Colleges.

Unethical use of AI, or any form of cheating in an academic assessment, is worrisome. It undermines the student learning process by reducing the opportunity to hone critical thinking, writing and other academic skills. Above all, cheating hurts society, as it produces professionals who lack the necessary skills.

As gatekeepers, higher education institutions (HEIs) work to safeguard academic integrity – from partnering with Turnitin (text-matching software), to signing the Academic Integrity Charter, to designing university regulations, codes of conduct and policies. Regardless, these efforts have failed. This is because the approach of sanctioning misconduct through disciplinary action (mark reduction, failing the module, or expulsion) delivers little benefit when the probability of identifying misconduct remains low. Moreover, "grades are seen as the currency of learning" and a steppingstone to better career opportunities. As a result, our policies and regulations remain inadequate to address the real incentives behind student cheating.

Students are "conscious decision makers." They decide what is good for them on the basis of available options. In economic terms, they choose the option that offers maximum utility. As the prime minister recently and controversially observed, students pursue HE because they expect the benefits (such as increased earnings and social status) to outweigh the expected costs (e.g., paying fees, or the opportunity cost of forgone earnings). Even today, "getting an undergraduate degree is worthwhile financially for most students." This is borne out by data showing that the number of postgraduate taught students increased from 271,475 in 2012-13 to 493,045 in 2021-22, while the number of first-degree students increased from 495,325 to 648,925.

Fig.3: First year higher education (HE) student enrolments by level of study, academic years 2012/13-2021/22. Source: HESA.

This begs the question: if students are rational and pursuing higher education yields higher returns, is academic misconduct not irrational, given that it results in zero learning? A law and economics analysis shows that, despite zero learning, cheating (in online or on-site exams) is a shortcut to good grades (a higher degree classification), which in turn open up broader career prospects. A review of job descriptions shows that many employers look for graduates with at least a 2:1 under their belt, or those with "strong academic backgrounds". Similar incentives exist for those interested in pursuing postgraduate education.

The incentives behind cheating become even stronger when the probability of getting caught for academic misconduct is low. In most British universities that probability is low, because assessors rely on Turnitin to check the originality of student submissions, even though the accuracy and effectiveness of Turnitin remain contested. Turnitin itself emphasises that it does not offer a definitive judgment on plagiarism; rather, its similarity report is intended to support the academic judgment of the assessor.

Academic judgment, the major determinant in detecting misconduct, is impaired by time constraints and the volume of student submissions in the UK. Marking and offering detailed summative feedback is a task to be completed within a deadline, ahead of moderation, exam boards and the release of results. Conducting meaningful checks to detect misconduct and impose proportionate penalties (which require hard evidence, as reiterated by the Office of the Independent Adjudicator) is rare. Thus, cheating remains lucrative, as most suspected plagiarism cases are treated as poor academic practice and result only in mark reduction – a profitable outcome for a student who might otherwise have failed.

Regrettably, a similar approach is being adopted to tackle AI-assisted misconduct. A number of institutions have adopted the new Turnitin with AI writing detection capabilities (while others have expressed an interest in adopting it later). However, as generative AI models are continuously advancing and improving at generating human-seeming content, it is improbable that any AI writing detection tool can flag submissions as AI-generated with certainty. If a student (with English as a second language) uses an AI paraphrasing tool (like Quillbot) to enhance the written quality of their work, the AI detector may flag the submission as AI-generated. This will only become more challenging as AI tools are embedded in MS Office. More worrying is the possibility of GPT detectors that "exacerbate systemic biases against non-native authors" by misidentifying their original work as AI-generated. Assessors are therefore likely to shy away from pursuing accusations they cannot support, which could result in bad press from the regulator.

Unfortunately, "we have created a system where we value grades more than learning." Cheating-enabling technologies will keep improving, and playing catch-up will be tough. Fixing this problem will require joint efforts from both HEIs and employers. Our forthcoming research shows misconduct could be limited if HEIs required students (with emphasis on international students) to complete mandatory training in each year of study to foster the fundamental values of academic integrity. Module leaders may consider focusing on experiential learning to enhance the student learning experience. For instance, based on my own teaching experience, it may be more effective to teach modules like Alternative Dispute Resolution (more specifically, hostile negotiations and decision-making skills) through training simulators that "realistically recreate […] a crisis situation" (like the Hydra/Minerva suite at UCLan).

Additionally, from an assessment angle, it may be advantageous to revamp the ways in which we test students. For example, essay-style assessments (which offer ample opportunity for plagiarism) could be limited to one per semester, in a module of interest to the student, while assessment in other modules could use a combination of work on real-world cases (for law students, this could be the law school clinic), internships, project presentations, and problem-based, time-limited, in-person exams. From a skill development angle, it could be rewarding if participation and excellence in extracurriculars (like moot courts) could earn a waiver of one submission or assessment per competition. Above all, to improve academic judgment, HEIs could consider incentivising good evaluation. For instance, the quality of supervision and summative feedback could be taken into consideration by line managers for career progression.

At the other end of the pipeline, it is imperative that prospective employers take action to address misconduct, as those who engage in academic misconduct are more likely to engage in dishonest acts in the workplace. Should hiring therefore be grade-based? No. Applicants should be tested on skills and the application of knowledge to real-world problems.

In sum, if we approach the problem in the right way, we may succeed in limiting misconduct and de-escalating grade obsession in favour of learning – which, after all, is the true purpose of education.


I am grateful to Shaun Mills, Associate Dean, School of Justice, UCLan for his invaluable comments and feedback.

The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Sergey Zolkin via Unsplash.



About the author

Utkarsh Leo

Utkarsh is a Lecturer in Law at the School of Justice, University of Central Lancashire (UCLan) in the United Kingdom. His teaching and research related work addresses law and emerging technology, alternative dispute resolution, economics and public policy. Prior to joining the School of Justice, Utkarsh served as an Assistant Professor of Law for about four years at the National Academy of Legal Studies and Research (NALSAR) in Hyderabad, India.

Posted In: AI Data and Society | Higher education
