
Bert Verhoeven

Vishal Rana

June 2nd, 2023

Knowledge work and the role of higher education in an AI era


Estimated reading time: 7 minutes

As AI becomes increasingly entangled into different forms of knowledge work, Bert Verhoeven and Vishal Rana discuss how higher education can adapt to meet the needs of a changing labour market. Pointing to the limits of traditional forms of testing in higher education and the benefits of study in practice and authentic assessment, they argue higher education can reshape itself in ways that emphasise more and better skills than knowledge retention.


OpenAI’s CEO, Sam Altman, recently warned a panel of US senators about potential disruption to the careers of knowledge workers around the globe. Large Language Models (LLMs) – like ChatGPT, BingChat, and Bard – demonstrate unparalleled capabilities in myriad areas. Although not without flaws (yet), these include, but are not limited to, data storage, query responses, and the generation of essays, reports, academic papers, policies, strategies, legal documentation, and code. These skills epitomise the expertise of knowledge workers worldwide. With AI technologies beginning to transform our world, it is essential that we critically re-think our curriculum, pedagogical methodologies, and assessment approaches to equip students adequately for a rapidly evolving landscape.

In our recent experience with students leveraging LLMs for educational pursuits, such as design research, ideation, critical and creative thinking, prototyping, and concept testing, we’ve observed three distinct reactions to AI’s incorporation as a learning tool. (1) The overwhelming majority shows pure enthusiasm and awe. (2) A smaller group expresses perplexity and worry, voicing concerns such as: “What is my role in this new world order?”. (3) Finally, a smaller percentage of students respond with palpable fear and resentment towards the technology, steadfastly refusing to utilise it. It is incumbent on educators to address these varying reactions and ensure educational systems are robust and adaptive enough to meet the needs of all learners. This includes a responsibility to prepare students with higher-order skills, such as emotional intelligence, collaboration, creativity and critical thinking that are likely to be required in the future. It requires a shift to experiential teaching methods, authentic assessment and adaptation of AI tutors to equip knowledge workers to not only coexist with AI, but thrive alongside it.

Quality assurance and assessment in the age of AI

Two examples illustrate the problem. In a recent discussion, a computer science academic raised concerns about first-year students using ChatGPT for assignments. The academic believed they could identify submissions crafted with ChatGPT’s assistance: they noted unusually high proficiency in some submissions, absent the rookie errors expected of first-year students, but lamented the lack of tools to confirm their suspicions.

In discussions with a management academic, we learned that they had tried to circumvent ChatGPT use by creating video-based case studies featuring industry experts discussing real management challenges in their organisations. To demonstrate the vulnerability of video assessments to AI-assisted cheating, we used the free transcription app Otter.ai to transcribe the videos and fed the transcripts into ChatGPT, achieving more than a passing grade, although not (yet) a High Distinction. The academic labelled us “evil”.
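The workflow we describe above can be sketched in a few lines. The function name and prompt wording below are illustrative assumptions, not the exact prompt we gave ChatGPT:

```python
# Hypothetical sketch of the workflow: a video case study is transcribed
# (e.g. with Otter.ai), and the transcript plus the assignment question
# are packaged into a single prompt for an LLM.

def build_case_study_prompt(transcript: str, question: str) -> str:
    """Combine a case-study transcript with an assignment question."""
    return (
        "You are completing a university management assignment.\n\n"
        "Case study transcript:\n"
        f"{transcript}\n\n"
        f"Assignment question: {question}\n"
        "Answer using only evidence from the transcript."
    )

# The assembled prompt would then be pasted into ChatGPT's chat
# interface (or sent programmatically via an API).
prompt = build_case_study_prompt(
    transcript="[transcript text produced by Otter.ai]",
    question="What management challenge does the expert describe?",
)
```

The point of the sketch is how little effort the workaround takes: the video format adds one transcription step, after which the assessment is as exposed as any text-based task.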

Upon further inquiry, ChatGPT itself candidly identified four categories of assessment vulnerable to cheating. (1) Essays or Reports: given a prompt or a keyword, ChatGPT can generate coherent and relevant texts, based on data analysis and the use of template tools like SWOT analysis or the Business Model Canvas. (2) Unsupervised Quizzes or Tests: comprising factual or multiple-choice questions. It is reported that ChatGPT-4 passed many higher education exams and achieved a score in the 90th percentile on the US Bar Exam. (3) Coding Assignments: tasks that require students to write a program or a script. ChatGPT can write code in various languages, enabling students to complete assignments without any programming skills. (4) Creative Writing Assignments: tasks that require students to produce a story, poem, song, etc. ChatGPT can generate creative texts in diverse genres and styles based on a prompt or a theme. Foremost, a culture of honesty and integrity fosters great learning, but the four assessment methods above make it not only easy for students to cheat, but also hard for honest students to use AI to its full capacity.

Balancing Examinations and Experiential Learning

To combat cheating, an ongoing debate has emerged within academia, with two prominent perspectives – one advocating for the value of monitored examinations and the other championing experiential learning with authentic assessments, both professing their respective efficacy in battling academic dishonesty. We believe there is a place for both, but we should scrutinise whether traditional examinations alone, predominantly assessing knowledge retention, hold their relevance in an AI era. Examinations are primarily focused on gauging pre-existing knowledge, but they frequently fall short when it comes to assessing real-world knowledge application, critical thinking, problem-solving, collaboration, and communication skills, traits that are increasingly vital in our rapidly evolving workforce. Despite its formidable capabilities, AI presently struggles to grasp context or form value judgments, aspects at which humans are remarkably adept. Humans possess an intrinsic knack for interpreting complex situations and producing valuable, innovative outcomes that benefit others. As such, knowledge work might shift from managing and creating to editing and facilitating.

Human Adjunct

As AI becomes more intricately woven into the fabric of our lives, it’s crucial to cultivate future skills in which AI serves as a human adjunct. Take, for instance, the sphere of emotional intelligence. Skills like empathy, motivation, self-regulation, collaboration, and social abilities are paramount for roles that involve understanding and addressing people’s needs and developing compassion. While AI can aid in the ideation process and churn out outputs based on learnt patterns, it grapples with generating truly original, context-sensitive ideas (user-driven innovation), a distinctly human capacity. Similarly, AI complements human efforts in the realm of complex problem-solving. While AI excels at tasks that adhere to set patterns or rules, it falters when confronted with intricate problems necessitating a nuanced grasp of the context to spawn innovative solutions. Skills for confronting and resolving such challenges will continue to be in high demand. Other critical competencies include ethical judgment and learning agility – the capacity to swiftly grasp new tools, systems, and concepts. Balancing these core human skills with AI’s capabilities is vital in preparing for a future where human and artificial intelligence work synergistically to solve the world’s challenges.

Harnessing AI Complementary Competencies: Preparing for the Future

In a rapidly evolving academic landscape, the limitations of traditional assessment methods like question-based essays and reports are increasingly exposed. Monitored exams, despite their continued relevance in maintaining quality assurance, are being supplemented by a pedagogy that embraces experiential learning. This hands-on approach fosters real-world skills, offering an AI-friendly alternative to conventional lecture-tutorial-exam formats. We suggest a shift towards authentic assessments aligned with this methodology, such as portfolios, project-based assignments, and (simulated) real-world tasks. This more resource-intensive approach has the potential to disrupt the current higher education business model, but universities that ignore or ban ChatGPT may be hurting their own admissions. Such a shift would allow future knowledge workers to shine where AI falls short, while also enabling AI to enhance their work. The rise of AI should be seen not as a threat but as a catalyst for growth, encouraging a symbiotic relationship between human potential and artificial intelligence.

 


The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Google DeepMind via Unsplash.


About the author

Bert Verhoeven

Bert Verhoeven is the Program Director of Innovation and Enterprise at Flinders University, Adelaide, Australia. He has led the largest competency-driven, cross-university Innovation & Enterprise (INNO) program in Australian higher education, grounded in deep scientific expertise, evidence, and industry practice. He has founded and developed several ventures and has spent the last 12 years continuously improving a learning-led approach to innovation capability building and culture change, facilitating and coaching start-ups, students, SMEs, and (non-)executive innovators in large organisations. You can reach out to him at: bert.verhoeven@flinders.edu.au

Vishal Rana

Dr Vishal Rana is the Discipline Leader of Management in the Department of Business, Strategy and Innovation at Griffith University, Australia. His research interests are in work design and entrepreneurship. He is the founder of multiple start-ups and holds a PhD in Human Resource Management from Griffith University, Australia. You can reach out to him at v.rana@griffith.edu.au

Posted In: AI Data and Society | Higher education
