
Philippa Hardman

July 18th, 2023

The invisible cost of resisting AI in higher education

Estimated reading time: 10 minutes

The rise of AI presents the very real risk that universities will become irrelevant – or even obsolete – if they resist it. Philippa Hardman explores how higher education might avoid that fate.

On 4 July, a new set of principles was published to help UK universities ensure students and staff are AI literate so they can capitalise on the opportunities technological breakthroughs provide for teaching and learning.

Backed by the 24 vice-chancellors of the Russell Group and developed in partnership with AI and educational experts, the new principles will shape institution- and course-level work to support the ethical and responsible use of software such as ChatGPT in higher education and an increasingly AI-enabled world.

The implications of this development are perhaps more significant than we realise. There has been much discussion in recent months about the risks associated with the rise of generative AI for higher education, with most of the discussion centring around the challenge that ChatGPT poses to academic integrity.

However, much less work has been done on exploring the negative – even existential – consequences that might stem from not embracing AI in higher education. Are these new principles enough to reverse the risk of irrelevance?

The threat of AI – and AI detection

In most universities, we assess student achievement through written assignments. The rise of ChatGPT over the last six months has broken this system. If AI can be prompted to generate written assignments that score higher than average student assignments, where does that leave academic integrity?

As a result of this crisis, we have seen a range of responses. In some countries, higher education institutions reacted by returning to in-classroom examinations. Elsewhere, we have seen widespread investment in a new wave of AI detection technologies such as GPTZero and Turnitin AI, which promise to detect AI-generated content. Both responses are problematic.

On a practical level, returning to in-person examinations is difficult to deliver and hard to scale. It also perpetuates a culture of distrust which arguably damages teacher-student relationships. As a result, while some institutions will return to in-person exams this summer, many others have already U-turned on this policy.

Meanwhile, although AI detection technologies may initially seem appealing to institutions which already use technology to detect and manage plagiarism, early research into how these tools work shows that there is no such thing as perfect AI detection:

  • They miss most AI-generated text. OpenAI’s own detector correctly identifies GPT-generated output only 26% of the time.
  • They generate false positives. OpenAI’s detection tool, for example, wrongly flags human writing as AI-generated 9% of the time (see the worked example after this list).
  • Even small changes to the text (including asking the AI to revise its own output) can break detectors. In my own tests, making just a single edit to an AI output was enough to fool most detectors.
  • A growing number of anti-detection tools are being built and advertised to students as a way to write assignments with AI and go undetected.
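
To make these failure rates concrete, here is a minimal back-of-the-envelope sketch. The cohort split is a hypothetical assumption; the detection and false-positive rates are the figures cited above for OpenAI’s own classifier:

```python
# Illustrative arithmetic only. The cohort split is a hypothetical assumption;
# the rates are the figures cited above for OpenAI's own classifier.

honest_students = 180        # assumed: students who wrote their own work
ai_assisted_students = 20    # assumed: students who submitted AI-written work

detection_rate = 0.26        # share of AI-written text the detector catches
false_positive_rate = 0.09   # share of human writing wrongly flagged as AI

wrongly_accused = honest_students * false_positive_rate    # ~16 honest students flagged
caught = ai_assisted_students * detection_rate             # ~5 of 20 AI users caught
missed = ai_assisted_students - caught                     # ~15 AI users pass undetected

print(f"Honest students wrongly flagged: {wrongly_accused:.0f}")
print(f"AI users caught: {caught:.0f} of {ai_assisted_students}")
print(f"AI users missed: {missed:.0f}")
```

Under these assumed numbers, the detector wrongly accuses roughly three times as many honest students as the AI users it catches – which is the heart of the false-positive problem.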

Other research has also shown that AI detection technologies are biased. In a paper published earlier this year, researchers found that detectors consistently misclassify non-native English writing samples as AI-generated, arguing that “GPT detectors may unintentionally penalise both non-native English writers and writers with constrained linguistic expressions”.

Attempting to ban ChatGPT through the use of detection technologies is clearly problematic – but it also fails to address the root cause of the problem that ChatGPT has exposed: the way we define and assess “learning” in higher education.

Reversing the risk of irrelevance

If the purpose of higher education is, as UNESCO states, to promote the exchange of knowledge, research and innovation and to equip students with the skills needed to meet ever-changing labour markets, then by banning tools such as ChatGPT we fail to deliver that value and run a high risk of irrelevance in a post-AI world.

What if, instead of banning AI, we used it as an opportunity to deliver on our often-made but less often kept promise to operate as a rich cultural and scientific asset, one which enables personal development and promotes economic, technological and social change?

What if we reimagine “learning” in higher education as something more than the recall and restructuring of existing information? What if instead of lectures, essays and exams we shifted to a model of problem sets, projects and portfolios?

I am often asked what this could look like in practice. If we turn to tried and tested instructional strategies which optimise for learner motivation and mastery, it would look something like this:

Step 1: shift to inquiry-based objectives (IBOs)

Inquiry-based objectives focus not only on the acquisition of knowledge but also on the development of skills and behaviours, such as critical thinking, problem-solving, collaboration, and research skills.

Rather than asking learners simply to recall or restate concepts delivered via text, lecture or video, inquiry-based objectives require them to construct their own understanding through investigation, analysis and questioning.

By shifting to inquiry-based objectives, we remove the risk of learners “cheating” (generating text-based responses using AI tools such as ChatGPT) and assess instead their ability to use AI tools along with books, articles and other resources to explore, analyse and construct meaning from information.

A key part of this process is the assessment of students’ interactions with AI: how well do they prompt AI? How and how well do they validate what AI tells them? What biases do they identify? How do we support our students not only to develop domain expertise but to be critical consumers – perhaps creators – of AI?

Step 2: design problems and projects

Once we have IBOs, we can move away from the lecture-plus-essay model and instead design projects or problem sets that learners complete to achieve the objectives.

To do this, we need to design a project or problem statement for each objective. If, instead of lecturing, we deliver the learning experience as a project to complete or a problem to solve, we optimise for motivation, knowledge acquisition and skill development. In short, regardless of age, ability and discipline, students learn best when they learn by doing and discovering, not by sitting and listening.

By requiring us to make these changes to how we design and deliver learning, AI forces us to innovate our pedagogies in a way that drives better learning outcomes for learners.

Step 3: design performance-based assessments

When gauging learner success in an inquiry-based approach, it’s essential to create a comprehensive mark scheme that considers not only knowledge acquisition but also the skills demonstrated during the inquiry process. This often involves applied, authentic activities and assessments that require learners to demonstrate their competence in a tangible way.

These assessments provide a more comprehensive and authentic evaluation of a learner’s abilities compared to traditional assessments, such as exams and multiple-choice questioning.

Creating a performance-based assessment involves reviewing each student project and identifying three elements (a minimal sketch of the resulting mark scheme follows the list):

  • Skills assessment: the skills the learners need to demonstrate in the course of their project. In the post-AI classroom, this includes the ability to use ChatGPT to explore and understand the topic.
  • Knowledge assessment: the conceptual knowledge and understanding that learners need to demonstrate. In the post-AI classroom, this includes identifying any misconceptions generated from ChatGPT “hallucinations”.
  • Process assessment: the methods, processes and/or behaviours that students need to demonstrate in the course of their project. In the post-AI classroom, this includes the ability to prompt, critique, compare and validate ChatGPT outputs.
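
To make this tangible, here is a minimal sketch of such a mark scheme expressed as a simple data structure. The criteria and weights are hypothetical placeholders for illustration, not a prescribed standard:

```python
from dataclasses import dataclass, field

@dataclass
class MarkScheme:
    """A hypothetical performance-based mark scheme covering the three elements above."""
    skills: dict = field(default_factory=dict)     # skills to demonstrate, with weights
    knowledge: dict = field(default_factory=dict)  # conceptual understanding, with weights
    process: dict = field(default_factory=dict)    # methods and behaviours, with weights

# Illustrative criteria and weights only; each criterion is scored by the assessor.
scheme = MarkScheme(
    skills={"Uses AI tools to explore and understand the topic": 0.15,
            "Synthesises AI outputs with books, articles and other sources": 0.20},
    knowledge={"Demonstrates core concepts accurately": 0.25,
               "Identifies and corrects AI 'hallucinations'": 0.10},
    process={"Prompts AI effectively and iteratively": 0.15,
             "Validates and critiques AI outputs against other sources": 0.15},
)

# Sanity check: the weights across all three elements should sum to 1.
total = sum(w for d in (scheme.skills, scheme.knowledge, scheme.process) for w in d.values())
assert abs(total - 1.0) < 1e-9
```

Writing the scheme down in this way makes the design decision explicit: conceptual knowledge is only one of three weighted elements, alongside skills and process.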

Equipping students for the future

The net result of this shift in practice is the introduction of a system of teaching and assessment which outperforms the existing system in two ways.

First, it equips students with the knowledge and skills needed to meet ever-changing labour markets. The World Economic Forum’s Future of Jobs Report 2023 found that a majority of the fastest-growing roles are in the technology sector. AI and machine learning skills, complemented by skills such as analytical thinking, evidence-based argumentation and communication, are predicted to be among the most valuable, in-demand skills in the coming years. By embracing AI in the higher education sector, we prepare our students for this world.

Second, it delivers a more active and participatory learning experience, which we know from 30+ years of learning science research leads to significantly better learner motivation, retention and achievement than the “sage on the stage” lecture-plus-essay model.

If universities want to remain relevant and purposeful, we need to recognise that we live in a world where both students and employers value and demand not just knowledge and qualifications but also – and increasingly – these high-value, real-world skills.

As non-traditional routes to skills and employment, such as bootcamps and next-generation apprenticeships, proliferate, higher education must recognise that failing to engage positively with the rise of AI, and to grasp the opportunities it brings, presents a very real risk that institutions become irrelevant and, in some contexts, even obsolete.

However, by embracing AI and investing in the necessary training for both faculty and students, universities have the opportunity to reinvent themselves, once again, as the gold standard for the knowledge and skills needed to meet ever-changing labour markets and to adapt to a rapidly changing post-AI world. The time is now, and the choice is ours.

  • Philippa Hardman can be found on LinkedIn and publishes a weekly newsletter on Substack

_______________________________________________________________________________________________________________________  

This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.   

_________________________________________________________________________________________________________________________  

Image source: photo by Christian Bowen on Unsplash  

About the author


Philippa Hardman

Dr Philippa Hardman is an affiliated scholar at the University of Cambridge, UK, an expert in AI-Ed and the founder of DOMS, an evidence-based learning design process.
