AI tools are available today that can write compelling university-level essays. Taking as an example a sample essay produced by the GPT-3 transformer, Mike Sharples discusses the implications of this technology for higher education and argues that it should be used to enhance pedagogy, rather than to accelerate an ongoing arms race between increasingly sophisticated fraudsters and fraud detectors.
With all the focus in artificial intelligence and education on AI personalised learning and tutoring systems, one AI technology has slipped by unnoticed. It could become a gift for student cheats, or a powerful teaching assistant, or a tool for creativity. It’s called a Transformer; it acts as a universal language tool, and it is set to disrupt education. Here’s how.
Take a minute to read this graduate student essay on the controversial topic of “learning styles”. If this were your student, how would you assess the essay? What mark would you give? What are its strengths and weaknesses?
∞∞∞
Learning Styles
The construct of “learning styles” is problematic because it fails to account for the processes through which learning styles are shaped. Some students might develop a particular learning style because they have had particular experiences. Others might develop a particular learning style by trying to accommodate to a learning environment that was not well suited to their learning needs. Ultimately, we need to understand the interactions among learning styles and environmental and personal factors, and how these shape how we learn and the kinds of learning we experience.
Learning styles and learning modes
In the 1970s, learning styles became an important feature of educational practice. In the 1990s, “learning modalities” became a major focus of learning style research. This is the idea that we process information in different ways. For example, some individuals may be auditory learners, benefiting from listening to lectures and using recorded materials. Others may be visual learners, who gain from using visual materials such as textbooks, handouts, and computer graphics. Still others may be kinesthetic learners, who benefit from learning by movement and doing. Numerous experiments have been conducted to test this idea, and a number of different classifications of learning modalities have been proposed.
In a review of this research, Dunn and Dunn (1997) found that the evidence does not support the idea that people have a particular learning mode that is dominant or that they process information in a particular way. In their study, the authors found that when students were given a particular instructional treatment, they could be classified in a particular learning modality category. But when they were given a different instructional treatment, they were classified in a different category. In a follow-up study in which they gave students a number of instructional treatments, Dunn and Dunn found that students could be placed in a particular category in some treatments, but not others.
Dunn and Dunn also found that the results varied by age. Older students generally fit the results of more than one modality, whereas younger students were more likely to fit the results of just one modality. This led them to conclude that style preferences are learned over the course of development. This is consistent with the idea that people are flexible in responding to the way they are taught.
References
Dunn, R., & Dunn, K. (1997). Learning styles: Research and practice. Journal of Research in Education, 7(2), 139-151.
∞∞∞
The entire “student essay” was written by a Transformer AI program, GPT-3. I gave it the prompt “The construct of ‘learning styles’ is problematic because” and it produced the rest, including headings and references. Plagiarism software will not detect essays written by Transformers, because the text is generated, not copied. A Google search of the essay shows that each sentence is original.
Note its reference for “Dunn and Dunn (1997)”. There is a Journal of Research in Education, but no issue 7(2) in 1997. Dunn and Dunn did publish on learning styles, but not in that journal. GPT-3 has fashioned a plausible-looking but fake reference. It has also invented a fictitious research study from Dunn and Dunn to critique learning styles.
Anyone can sign up for an account with the GPT-3 Transformer then go to its “Playground”, type the opening words of an assignment, press Submit and let it do the rest. A student can now generate an entire essay or assignment in seconds, at a cost of around 1 US cent. Other companies and universities are developing Transformer systems that write in multiple languages and learn as they go.
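For anyone who prefers to script this rather than use the Playground, here is a minimal sketch of what such a call might look like with the OpenAI Python client of the GPT-3 era. The engine name, temperature and output handling are illustrative assumptions, not a record of the exact settings used:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

    # The opening words of the assignment become the prompt.
    prompt = "The construct of 'learning styles' is problematic because"

    # Ask the base GPT-3 completion engine to continue the text.
    response = openai.Completion.create(
        engine="davinci",   # base GPT-3 engine; Mike mentions Davinci in the comments
        prompt=prompt,
        max_tokens=2000,    # the "maximum length" setting discussed in the comments
        temperature=0.7,    # sampling temperature; value is illustrative
    )

    print(prompt + response["choices"][0]["text"])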
Some Transformers, such as GPT-3, can filter bad language. But they are essentially amoral. They don’t know what they have written – they can’t reflect on their output. Can a powerful but amoral text generator be used for good in education? Here are some suggestions.
Just as a student can generate an essay in seconds, so a teacher can assess it. Add “Here is a short assessment of this student essay:” to the end of the text and GPT-3 writes a review.
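In API terms that is just a second completion call, with the essay plus the assessment cue as the new prompt. A rough sketch, under the same assumptions as the snippet above (essay_text stands in for the generated essay):

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    essay_text = "..."  # the generated essay from the sketch above

    # Append the assessment cue and ask GPT-3 to continue with a review.
    review = openai.Completion.create(
        engine="davinci",
        prompt=essay_text + "\n\nHere is a short assessment of this student essay:",
        max_tokens=300,     # enough for a short review; value is illustrative
        temperature=0.7,
    )

    print(review["choices"][0]["text"])

The review it produced is below.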
∞∞∞
The essay discusses the idea of learning styles, and how they may be developed over time. It cites research by Dunn and Dunn which suggests that learning styles are not fixed, and that people can learn in different ways depending on the situation.
The essay is well-organized and flows smoothly. It makes a clear argument and supports it with evidence from research.
One potential criticism is that the essay does not discuss the implications of learning styles research, or how this research can be used to improve learning. However, this is not necessarily a flaw, as the essay is only meant to be a brief overview of the topic.
∞∞∞
It does a reasonable job. The program transforms surface text into a review but it doesn’t get to the essay’s meaning, nor does it find the fake reference or the made-up research study.
Students will employ AI to write assignments. Teachers will use AI to assess them. Nobody learns, nobody gains. If ever there were a time to rethink assessment, it’s now. Instead of educators trying to outwit AI Transformers, let’s harness them for learning.
First, Transformers can quickly show students different ways to express ideas and structure assignments. A teacher can run a classroom exercise to generate a few assignments on a topic, then get students to critique them and write their own better versions.
Second, AI Transformers can be creativity tools. Each student writes a short story with an AI. The student writes the first paragraph, AI continues with the second, and so on. It’s a good way to explore possibilities and overcome writer’s block.
Third, teachers can explore the ethics and limits of AI. How does it feel to interact with an expert wordsmith that has no morals and no experience of the world? Does a “deep neural network” have a mind, or is it just a big data processor?
Finally, as educators, if we are setting students assignments that can be answered by AI Transformers, are we really helping students learn? There are many better ways to assess for learning: constructive feedback, peer assessment, teachback. If Transformer AI systems have a lasting influence on education, maybe that will come from educators and policy makers having to rethink how to assess students, away from setting assignments that machines can answer, towards assessment for learning.
For more on AI Transformers and computers as story generators, see Mike Sharples and Rafael Pérez y Pérez, Story Machines: How Computers Have Become Creative Writers, to be published by Routledge in July 2022.
Many thanks Mike, a really interesting and thought-provoking piece. I wonder if you’d be able to share the settings you used on GPT-3 to generate the essay above? I’ve not been able to reproduce anything close using the same prompt, which I’m sure is due to my lack of knowledge about the technology.
Thanks.
Hi Clare,
Glad you like the piece.
I used the standard settings with the Davinci GPT-3 engine, apart from setting “maximum length” to 2000. It generated the text as shown, up to “they are taught.”. I then appended “References” and GPT-3 added the reference, as shown.
Mike
Thanks so much Mike! Really interesting. I wouldn’t expect to get the same result as you – I guess that’s the whole point. GPT-3 should be “creating” an original answer for anyone who puts in the same prompt (and students are unlikely to use the same prompt anyway).
Some of my first attempts gave me a pretty underwhelming essay that might just pass for something a 14-year-old could write, but nothing at UG or PG level. So I carried on playing around.
I’ve since reset all settings and have set maximum length to 2000 as per your reply, and am getting the following:
“The concept of ‘learning styles’ is problematic because it is based on the idea that there is a single way to learn that is optimal for each individual. However, research has shown that there is no evidence to support the existence of learning styles. Furthermore, the idea of learning styles can actually be harmful, as it can lead students to believe that they cannot learn in any other way than their preferred style. This can lead to students feeling discouraged and unmotivated when they are faced with material that they find difficult to learn using their preferred style.”
Even some tweaking of settings doesn’t seem to encourage GPT-3 to give me any more than this. I wonder how much it might depend on my previous use of Playground, and whether I can “train” GPT-3 to give me “better” responses in future?
I’ll be really interested to see if anyone else tries this and what they get.
Thanks so much for getting me thinking about this Mike!
The latest version of GPT-3 has been trained to accept instructions, such as “Write a student essay on the topic of ‘A critique of learning styles'”. I was using an earlier version trained for text continuation. You could either try with the earlier version (it should still be available on the OpenAI Playground), or try giving the new version a direct instruction.
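For example, something along these lines; this is only a sketch, and the instruction-tuned engine name is my assumption about what the Playground offered at the time:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    response = openai.Completion.create(
        engine="text-davinci-002",  # instruction-following GPT-3; engine name is an assumption
        prompt="Write a student essay on the topic of 'A critique of learning styles'.",
        max_tokens=1000,
        temperature=0.7,
    )

    print(response["choices"][0]["text"])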
Thank you so much Mike for the insight. It is interesting to realise that the same entry, repeated with exactly the same wording, generates a different response each time.
Hi Mike – This is truly fascinating (and of course scary). I particularly liked your idea of using GPT-3 as a tool to teach students creative writing and critiquing academic writing. I created an account in GPT-3 and I must be doing something wrong because I am not able to get beyond the tutorials. I’ll keep trying.
Hi Kiran
You need to go to the API, then Playground.
Mike
Thanks Mike. That worked.
Hi Mike,
I really like the way you’ve repositioned the debate.
I was inspired to have a go this morning, with a primary education creative writing focus.
My (partially) successful results are: https://www.linkedin.com/posts/activity-6940252465426571265-0u-M?utm_source=linkedin_share&utm_medium=member_desktop_web
Thanks Mike, how do we know this post wasn’t written with AI? And the responses in the comments generated by bots?
This has already gone mainstream. YouMakr.com is a tool which helps students with their writing assignments and has already gone viral in many countries globally. They are on track for a billion-dollar valuation.
Mike,
Thank you for this! I think you’ve really helped frame an important discussion about using LLM transformers to help students learn. However, I am curious: what sort of essays, or writing in general, could we as educators assign that GPT-3 could not eventually answer?
I think Google’s LaMDA transformer contains nearly tenfold the amount of data engineers used with GPT-3, even causing the bizarre Google employee episode where he tried to convince folks that it had become sentient (it isn’t, of course). However, as transformers develop and progress, I don’t really know what sort of assignments we can come up with that only a human could write.
Even SudoWrite’s algorithm can do a fairly decent job of mimicking phrases and moments of empathy, so I assume that transformers will be able to tackle creative writing one day.
Maybe the key is to continually engage students through co-writing with AI and let them practice critical thinking, self assessment, and reflection by emphasizing that an AI, when used effectively, can help an immature writer’s process or even a mature writer who suffers from decision fatigue.
Inspiring stuff, Mike. Your perspectives, moving away from the sensationalist focus on the negative connotations of AI in education, are refreshing to see. I have taken a look at the tools you used and tried them out for myself. The one thing I am not able to replicate is the length of the piece you were able to get the AI to produce. I seem only able to get one paragraph from the tool.
We are doing some exploratory research on views related to AI writing tools in education. Please share your thoughts and consider sending it onward.
We are using a tool called Polis, where you can vote on individual statements about the topic, see a visualisation of where your position sits in relation to others who voted, and you can also add your own perspective for others to vote on.
https://pol.is/7ncmuk4ume
There’s a very simple way to control the abuse of “AI” to write student essays – the personal tutorial where the student has to read their essay out and be questioned on it. This has worked well in the past. Maybe it’s time to revive it.
In an attempt to cut down on plagiarism and purchased papers, I revised my assignments so that they were both scaffolded and required the use of assigned sources. When outside research is required, students must justify the reliability of the sources. Requiring regular annotated responses to the readings also gives me the ability to see when essays seem to be in line with student work. Based on my limited exploration of the app, that approach also seems to address the problems raised by the new technology.
The problem is that it requires a lot more grading on my part than the traditional exam-essay assessment approach, but I don’t see that very much anymore.
Suppose students were asked to integrate their own personal experiences with learning styles (or any topic of an assignment) and specific examples from different points in their lives where they learned to learn as they do: how would AI handle that? Since our individual experiences are points on a distribution captured in research data, could applying research to understand experience help?
Mike, when I tried this on different topics with “citations” requested, it produced all fake citations. Plausible looking, but fake. A student doing that would likely fail! Or at the least be in for a grilling and a stern warning. So, I don’t think ChatGPT is much of a threat as a source of academic misconduct. Just ask for references and check them to catch chancers taken in by the hype.
Can we please stop calling Turnitin etc “Plagiarism software”!?
Thanks
David Callaghan
Excellent points and thank you Mike for bringing up the topic of AI tools that can write university level essays. I am fascinated by the potential impact this technology could have on higher education. I appreciate your argument that these tools should be used to enhance pedagogy, rather than accelerating cheating and fraud.
It’s interesting to see how the GPT-3 Transformer AI program was able to generate a compelling essay on the topic of “learning styles”. The essay provides a well-organized and evidence-based argument, despite being generated by an AI language model. I particularly like how it explores the idea that learning styles are not fixed, and can be influenced by personal and environmental factors.
Although there are concerns about how these AI tools could be misused for cheating, there is potential for them to be used as a powerful teaching assistant or tool for creativity. I hope that these tools will be used responsibly and ethically to enhance the learning experience, rather than undermine it.